An auto-encoder is an artificial neural network used for learning efficient codings. The aim of an auto-encoder is to learn a compressed representation (encoding) for a set of data. An auto-encoder uses three kinds of layer:
- An input layer. For example, in a face recognition task, the neurons in the input layer could map to pixels in the photograph.
- One or more considerably smaller hidden layers, whose activations form the encoding.
- An output layer, where each neuron has the same meaning as in the input layer.
If linear neurons are used, an auto-encoder trained to minimize squared reconstruction error learns to span the same subspace as principal component analysis (PCA).
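The three-layer structure above can be sketched in a few lines of Python. All names and sizes here are illustrative assumptions (an 8×8 input patch compressed to a 16-unit code), not details from this article:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 64 input neurons (e.g. an 8x8 image patch)
# compressed into a 16-neuron hidden layer.
n_in, n_hidden = 64, 16

# Randomly initialized weights and biases for encoder and decoder.
W_enc = rng.normal(scale=0.1, size=(n_hidden, n_in))
W_dec = rng.normal(scale=0.1, size=(n_in, n_hidden))
b_enc = np.zeros(n_hidden)
b_dec = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def encode(x):
    # Hidden layer: the compressed representation (the encoding).
    return sigmoid(W_enc @ x + b_enc)

def decode(h):
    # Output layer: same size and meaning as the input layer.
    return sigmoid(W_dec @ h + b_dec)

x = rng.random(n_in)   # one input vector (e.g. pixel intensities)
h = encode(x)          # 16-dimensional code
x_hat = decode(h)      # reconstruction of the input
```

The output layer mirrors the input layer, so training reduces to making `x_hat` match `x` as closely as possible.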
Training
An auto-encoder is often trained with one of the many variants of backpropagation (e.g. the conjugate gradient method or steepest descent).
A technique developed by Geoffrey Hinton for training many-layered "deep" auto-encoders treats each neighbouring pair of layers as a restricted Boltzmann machine, pre-training them one pair at a time to reach an approximately good solution, and then uses backpropagation to fine-tune the whole network.
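The plain backpropagation route can be sketched with ordinary gradient descent on a tiny linear auto-encoder, which also illustrates the connection to PCA noted above. The data set, sizes, and learning rate are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 200 samples in 10 dimensions that lie near a
# 3-dimensional subspace (assumed setup for illustration).
Z = rng.normal(size=(200, 3))
A = rng.normal(size=(3, 10))
X = Z @ A + 0.01 * rng.normal(size=(200, 10))

n_in, n_hidden = 10, 3
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))  # encoder weights
W2 = rng.normal(scale=0.1, size=(n_hidden, n_in))  # decoder weights

def loss(X, W1, W2):
    # Mean squared reconstruction error.
    R = X @ W1 @ W2 - X
    return (R ** 2).mean()

lr = 0.01
initial = loss(X, W1, W2)
for _ in range(500):
    H = X @ W1                       # hidden codes for all samples
    E = H @ W2 - X                   # reconstruction errors
    gW2 = H.T @ E / len(X)           # gradient w.r.t. decoder weights
    gW1 = X.T @ (E @ W2.T) / len(X)  # gradient w.r.t. encoder weights
    W1 -= lr * gW1                   # steepest-descent update
    W2 -= lr * gW2
final = loss(X, W1, W2)
```

After training, the reconstruction error is much smaller than at initialization; the learned 3-dimensional code captures the same subspace that the leading principal components would.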
External links
- Presentation introducing auto-encoders for number recognition
- Reducing the Dimensionality of Data with Neural Networks (Science, 28 July 2006, Hinton & Salakhutdinov)