A pretraining technique developed by [[Geoffrey Hinton]] for training many-layered "deep" autoencoders treats each neighboring pair of layers as a [[Boltzmann machine#Restricted Boltzmann Machine|restricted Boltzmann machine]], so that pre-training approximates a good solution, and then fine-tunes the whole network with backpropagation.
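A minimal sketch of this layer-wise procedure in NumPy, assuming one-step contrastive divergence (CD-1); the data, layer sizes, and learning rates here are illustrative assumptions, not Hinton's published settings:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=10, lr=0.1):
    """Train one restricted Boltzmann machine with one-step contrastive divergence."""
    n_visible = data.shape[1]
    W = rng.normal(0.0, 0.01, (n_visible, n_hidden))
    b_v = np.zeros(n_visible)  # visible-unit biases
    b_h = np.zeros(n_hidden)   # hidden-unit biases
    for _ in range(epochs):
        # Positive phase: hidden probabilities given the data.
        h_prob = sigmoid(data @ W + b_h)
        h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)
        # Negative phase: one Gibbs step back to the visible units (CD-1).
        v_recon = sigmoid(h_sample @ W.T + b_v)
        h_recon = sigmoid(v_recon @ W + b_h)
        # Contrastive-divergence updates.
        W += lr * (data.T @ h_prob - v_recon.T @ h_recon) / len(data)
        b_v += lr * (data - v_recon).mean(axis=0)
        b_h += lr * (h_prob - h_recon).mean(axis=0)
    return W, b_h, sigmoid(data @ W + b_h)  # weights, biases, features for the next layer

# Greedy layer-wise pre-training: each neighboring pair of layers is an RBM,
# and the hidden activations of one RBM become the training data for the next.
X = rng.random((500, 64))   # toy data standing in for, e.g., image pixels
layer_sizes = [32, 8]       # two stacked RBMs: 64 -> 32 -> 8
weights, features = [], X
for n_hidden in layer_sizes:
    W, b_h, features = train_rbm(features, n_hidden)
    weights.append((W, b_h))
# `weights` now initializes the encoder half of a deep autoencoder; the decoder
# reuses the transposed weights, and backpropagation fine-tunes the unrolled network.
</syntaxhighlight>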
High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. [[Gradient descent]] can be used for fine-tuning the weights in such "autoencoder" networks, but this works well only if the initial weights are close to a good solution. There are effective ways of initializing the weights that allow deep autoencoder networks to learn low-dimensional codes that work much better than [[principal component analysis]] as a tool for reducing the dimensionality of data.
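As an illustration of the bottleneck idea only, the following NumPy sketch trains a small autoencoder with a two-unit central layer by plain gradient descent; the data, sizes, and learning rate are assumptions made for demonstration, and in practice the weights would be initialized by the RBM pre-training above rather than at random:

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

# Toy high-dimensional data (a hypothetical stand-in for real inputs such as images).
X = rng.random((200, 30))
X = X - X.mean(axis=0)  # center the data, as one would before PCA

n_code = 2                               # small central layer: the low-dimensional code
W1 = rng.normal(0.0, 0.1, (30, n_code))  # encoder weights
W2 = rng.normal(0.0, 0.1, (n_code, 30))  # decoder weights
lr = 0.01

for step in range(2000):
    code = np.tanh(X @ W1)   # encode each input into the bottleneck
    recon = code @ W2        # linear decode back to input space
    err = recon - X          # reconstruction error to be minimized
    # Backpropagate the squared-error loss through decoder and encoder.
    grad_W2 = code.T @ err / len(X)
    grad_code = (err @ W2.T) * (1.0 - code ** 2)  # tanh derivative
    grad_W1 = X.T @ grad_code / len(X)
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

print("final reconstruction MSE:", (err ** 2).mean())
</syntaxhighlight>

Because every input must be reconstructed through the two-unit code layer, training pressures the code to retain the most informative structure of the data, the role played by the leading components in principal component analysis; with nonlinear units and good initialization, the learned codes can capture structure that PCA cannot.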