== Introduction ==
An '''auto-encoder''' is an [[artificial neural network]] used for learning efficient codings.
The aim of an auto-encoder is to learn a compressed representation (encoding) for a set of data.
Auto-encoders use three layers:
* An input layer. For example, in a face recognition task, the neurons in the input layer could map to pixels in the photograph.
* A number of considerably smaller hidden layers, which form the compressed encoding.
* An output layer, where each neuron has the same meaning as in the input layer.
If linear neurons are used, then an auto-encoder is very similar to [[principal components analysis|PCA]].
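The layered structure can be illustrated with a short sketch in Python using [[NumPy]]; the layer sizes, the random weights and the single example input below are illustrative assumptions rather than part of any standard implementation:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions): 64 input neurons
# (e.g. an 8x8 pixel patch) and 8 hidden neurons.
n_input, n_hidden = 64, 8

# Encoder and decoder weights of a three-layer auto-encoder.
W_enc = rng.normal(scale=0.1, size=(n_input, n_hidden))
b_enc = np.zeros(n_hidden)
W_dec = rng.normal(scale=0.1, size=(n_hidden, n_input))
b_dec = np.zeros(n_input)

def encode(x):
    # Linear hidden units: the learned code lives in a small
    # subspace, which is what makes the comparison with PCA apt.
    return x @ W_enc + b_enc

def decode(code):
    # The output layer has the same size and meaning as the input layer.
    return code @ W_dec + b_dec

x = rng.normal(size=(1, n_input))      # one example input
reconstruction = decode(encode(x))     # reconstruction at the output layer
print(reconstruction.shape)            # (1, 64)
</syntaxhighlight>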
== Training ==
An auto-encoder is often trained using one of the many [[backpropagation]] variants (Conjugate Gradient Method, Steepest Descent, etc.).
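As an illustration, the following sketch trains a linear auto-encoder by plain steepest descent on the squared reconstruction error; the random data, learning rate and number of iterations are arbitrary assumptions, and a real application would typically use one of the more sophisticated optimisers mentioned above:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n_input, n_hidden, lr = 64, 8, 0.01          # illustrative sizes / step size

X = rng.normal(size=(256, n_input))          # stand-in training data

W_enc = rng.normal(scale=0.1, size=(n_input, n_hidden))
W_dec = rng.normal(scale=0.1, size=(n_hidden, n_input))

for epoch in range(100):
    H = X @ W_enc                            # hidden code (linear neurons)
    R = H @ W_dec                            # reconstruction at the output layer
    err = R - X                              # reconstruction error
    # Gradients of the mean squared error with respect to both weight matrices.
    grad_dec = H.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec                   # one steepest-descent step
    W_enc -= lr * grad_enc

print(float(np.mean(err ** 2)))              # reconstruction error after training
</syntaxhighlight>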
A technique developed by Geoffrey Hinton for training many-layered "deep" auto-encoders involves treating each neighbouring set of two layers like a [[Restricted Boltzmann Machine]] during pre-training, so as to approximate a good solution, and then using a backpropagation technique to fine-tune the whole network.
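The pre-training step can be sketched roughly as follows (Python with NumPy); the one-step contrastive-divergence update is simplified, biases are omitted, and all sizes, hyper-parameters and data are placeholder assumptions rather than Hinton's actual setup:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_rbm(data, n_hidden, epochs=10, lr=0.05):
    """Simplified one-step contrastive divergence (CD-1) for a binary RBM."""
    n_visible = data.shape[1]
    W = rng.normal(scale=0.01, size=(n_visible, n_hidden))
    for _ in range(epochs):
        # Positive phase: hidden probabilities given the data.
        h_prob = sigmoid(data @ W)
        h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)
        # Negative phase: reconstruct the visible units, then the hidden units.
        v_recon = sigmoid(h_sample @ W.T)
        h_recon = sigmoid(v_recon @ W)
        # CD-1 weight update.
        W += lr * (data.T @ h_prob - v_recon.T @ h_recon) / len(data)
    return W

# Greedy layer-wise pre-training: each neighbouring pair of layers is treated
# as an RBM, and the hidden activities of one become the "data" for the next.
X = (rng.random((256, 64)) > 0.5).astype(float)   # stand-in binary data
W1 = train_rbm(X, 32)
H1 = sigmoid(X @ W1)
W2 = train_rbm(H1, 16)
# W1 and W2 (and their transposes) would then initialise a deep auto-encoder,
# which is fine-tuned end-to-end with backpropagation.
</syntaxhighlight>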
== External links ==