An auto-encoder is often trained using one of the many [[Backpropagation]] variants ([[Conjugate gradient|Conjugate Gradient Method]], [[Steepest Descent]], etc.). Though often reasonably effective, there is a fundamental problem with using backpropagation to train such a deep network: by the time the errors have been backpropagated to the first few layers, they are minuscule and largely ineffectual. As a result, the network almost always learns to reconstruct the average of all the training data. Though more advanced backpropagation methods (such as the Conjugate Gradient Method) help with this to some degree, learning remains very slow and the solutions poor. This problem is remedied by using initial weights that approximate the final solution; the process of finding these initial weights is often called pretraining. The sketch below illustrates the shrinking error signal.
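The following minimal sketch illustrates the problem described above: an error signal backpropagated through a stack of sigmoid layers shrinks at each layer. The depth, layer width, and weight scale are arbitrary illustrative choices, not values from any particular auto-encoder.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# An illustrative deep stack of sigmoid layers with small random initial weights.
layer_sizes = [30] * 9
weights = [rng.normal(0, 0.1, size=(n, n)) for n in layer_sizes[:-1]]

# Forward pass on a random input.
activations = [rng.random(layer_sizes[0])]
for W in weights:
    activations.append(sigmoid(activations[-1] @ W))

# Backpropagate an arbitrary output-layer error toward the input layer.
delta = np.ones(layer_sizes[-1])
for W, a in zip(reversed(weights), reversed(activations[1:])):
    delta = (delta * a * (1 - a)) @ W.T   # sigmoid derivative, then weights
    print(np.linalg.norm(delta))          # the error norm shrinks layer by layer
</syntaxhighlight>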
A pretraining technique developed by [[Geoffrey Hinton]] for training many-layered "deep" auto-encoders involves treating each neighbouring pair of layers as a [[Boltzmann machine#Restricted Boltzmann Machine|restricted Boltzmann machine]], so that pretraining approximates a good solution, and then using backpropagation to fine-tune the result.
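A minimal sketch of this idea is given below, using NumPy: each restricted Boltzmann machine is trained with one step of contrastive divergence (CD-1), the hidden activations of one layer become the training data for the next, and the stacked weights then initialise an encoder/decoder pair that backpropagation would fine-tune. The toy data, layer sizes, and learning rate are illustrative assumptions, not the settings from Hinton's original work.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=10, lr=0.1):
    """Train one restricted Boltzmann machine with contrastive divergence (CD-1)."""
    n_visible = data.shape[1]
    W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
    b_v = np.zeros(n_visible)   # visible bias
    b_h = np.zeros(n_hidden)    # hidden bias
    for _ in range(epochs):
        # Positive phase: hidden activations driven by the data.
        h_prob = sigmoid(data @ W + b_h)
        h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)
        # Negative phase: one Gibbs step back to a reconstruction.
        v_prob = sigmoid(h_sample @ W.T + b_v)
        h_prob_neg = sigmoid(v_prob @ W + b_h)
        # CD-1 update: difference between data-driven and reconstruction statistics.
        W += lr * (data.T @ h_prob - v_prob.T @ h_prob_neg) / len(data)
        b_v += lr * (data - v_prob).mean(axis=0)
        b_h += lr * (h_prob - h_prob_neg).mean(axis=0)
    return W, b_v, b_h

# Toy binary data; the layer sizes are illustrative.
X = (rng.random((500, 64)) < 0.3).astype(float)
layer_sizes = [32, 16]

# Greedy layer-wise pretraining: each RBM's hidden activations feed the next RBM.
stack, inputs = [], X
for n_hidden in layer_sizes:
    W, b_v, b_h = train_rbm(inputs, n_hidden)
    stack.append((W, b_h, b_v))
    inputs = sigmoid(inputs @ W + b_h)

# The stacked weights initialise an encoder/decoder pair ("unrolling");
# backpropagation would then fine-tune the whole network from this starting point.
def encode(x):
    for W, b_h, _ in stack:
        x = sigmoid(x @ W + b_h)
    return x

def decode(h):
    for W, _, b_v in reversed(stack):
        h = sigmoid(h @ W.T + b_v)
    return h

print("mean reconstruction error:", np.mean((X - decode(encode(X))) ** 2))
</syntaxhighlight>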
== External links ==