Autoencoder

This is an old revision of this page, as edited by Warrior4321 (talk | contribs) at 15:23, 26 December 2007.

An auto-encoder is an artificial neural network used for learning efficient codings. The aim of an auto-encoder is to learn a compressed representation (encoding) for a set of data. Auto-encoders use three layers:

  • An input layer. For example, in a face recognition task, the neurons in the input layer could map to pixels in the photograph.
  • A number of considerably smaller hidden layers, which will form the encoding.
  • An output layer, where each neuron has the same meaning as in the input layer.
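The three layers above can be sketched as a forward pass in NumPy. This is a minimal illustration, not a trained model: the weights here are random placeholders, the layer sizes (64 inputs, 8 hidden units) are arbitrary choices, and the sigmoid activation is one common option.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 64 input "pixels", an 8-unit hidden encoding.
n_in, n_hidden = 64, 8

# Randomly initialised weights; a real auto-encoder would learn these.
W_enc = rng.normal(scale=0.1, size=(n_hidden, n_in))
b_enc = np.zeros(n_hidden)
W_dec = rng.normal(scale=0.1, size=(n_in, n_hidden))
b_dec = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def encode(x):
    # Hidden layer: the compressed representation (encoding).
    return sigmoid(W_enc @ x + b_enc)

def decode(h):
    # Output layer: same size and meaning as the input layer.
    return sigmoid(W_dec @ h + b_dec)

x = rng.random(n_in)                    # e.g. normalised pixel intensities
reconstruction = decode(encode(x))      # same shape as the input
```

Note that the hidden layer is deliberately smaller than the input, so the network is forced to compress.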

If linear neurons are used, an auto-encoder is very similar to principal components analysis (PCA): the best a linear encoder–decoder pair can do is reconstruct the data from the subspace spanned by its first principal components.
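This relationship can be checked numerically. The sketch below (an illustrative toy setup, with arbitrary data sizes) fits a rank-3 PCA reconstruction via the SVD and trains a linear auto-encoder by plain gradient descent; since PCA gives the optimal low-rank linear reconstruction, the auto-encoder's error can approach but never beat it.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 200, 10, 3
X = rng.normal(size=(n, d)) * np.linspace(1.0, 3.0, d)  # anisotropic toy data
X = X - X.mean(axis=0)                                  # centre the data

# PCA: optimal rank-k linear reconstruction, via the SVD.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_pca = (X @ Vt[:k].T) @ Vt[:k]
pca_err = np.mean((X - X_pca) ** 2)

# Linear auto-encoder: encoder E (d -> k) and decoder D (k -> d),
# trained by full-batch gradient descent on squared reconstruction error.
E = rng.normal(scale=0.1, size=(d, k))
D = rng.normal(scale=0.1, size=(k, d))
lr = 0.01
for _ in range(3000):
    R = X @ E @ D - X            # reconstruction residual
    gD = (X @ E).T @ R / n       # gradient w.r.t. decoder weights
    gE = X.T @ R @ D.T / n       # gradient w.r.t. encoder weights
    D -= lr * gD
    E -= lr * gE
ae_err = np.mean((X @ E @ D - X) ** 2)  # always >= pca_err
```

The trained encoder need not recover the principal components themselves (any basis of the same subspace gives the same reconstruction), which is why the two methods are "very similar" rather than identical.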

Training

An auto-encoder is often trained using one of the many variants of backpropagation (e.g., the conjugate gradient method or steepest descent).
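A plain steepest-descent training loop can be sketched as follows, with the backpropagation gradients derived by hand for a one-hidden-layer sigmoid auto-encoder. The data, sizes, and learning rate are illustrative choices, not values from the article.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 100, 16, 4
X = rng.random((n, d))                      # toy data in [0, 1]

W1 = rng.normal(scale=0.1, size=(d, k))     # encoder weights
b1 = np.zeros(k)
W2 = rng.normal(scale=0.1, size=(k, d))     # decoder weights
b2 = np.zeros(d)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
errors = []
for _ in range(500):
    # Forward pass.
    H = sigmoid(X @ W1 + b1)                # hidden encoding
    Y = sigmoid(H @ W2 + b2)                # reconstruction
    errors.append(np.mean((Y - X) ** 2))

    # Backward pass: backpropagate the squared-error gradient.
    dY = 2 * (Y - X) / n                    # dLoss/dY
    dZ2 = dY * Y * (1 - Y)                  # through the output sigmoid
    dW2 = H.T @ dZ2
    db2 = dZ2.sum(axis=0)
    dH = dZ2 @ W2.T
    dZ1 = dH * H * (1 - H)                  # through the hidden sigmoid
    dW1 = X.T @ dZ1
    db1 = dZ1.sum(axis=0)

    # Steepest-descent update.
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= lr * g
```

Conjugate-gradient training would replace the last update with a line search along conjugate directions; the forward and backward passes stay the same.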

A technique developed by Geoffrey Hinton for training many-layered "deep" auto-encoders involves treating each neighbouring pair of layers as a Restricted Boltzmann Machine, pre-training the layers one at a time to approximate a good solution, and then fine-tuning the whole network with backpropagation.