Autoencoder: Difference between revisions

potential in machine learning. In a series of simulation studies using benchmark problems from the UCI database, the divergent autoencoder showed learning and generalization performance comparable to state-of-the-art algorithms, with several major advantages: no evidence of overfitting, low sensitivity to parameter settings, and fast runtimes.
 
==Methods to increase capacity==
Capacity here means the variety of patterns the auto-encoder can successfully learn. Obviously it is much harder to learn all the digits plus the English alphabet than just one class, say only 0's. One constraint on this capacity is the number of units in the middle layer and the total number of layers, but increasing both makes learning more difficult, since any trained auto-encoder corresponds to a single minimum of the energy landscape. Even if this is the global minimum, it may not be enough, I fear. So I am hoping that by changing the structure of the auto-encoder we can increase this capacity. Any tricks you've been using?
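As a rough illustration of the capacity question above, here is a minimal sketch (in PyTorch) of a fully connected auto-encoder in which the width of the code layer and the number of hidden layers are the knobs that set capacity; the specific layer sizes and the helper name <code>make_autoencoder</code> are illustrative assumptions, not taken from this page.

<syntaxhighlight lang="python">
# Hypothetical sketch: a fully connected auto-encoder whose capacity is
# controlled by the width of the middle (code) layer and the number of
# hidden layers. All layer sizes here are illustrative.
import torch
import torch.nn as nn


def make_autoencoder(input_dim=784, hidden_dims=(256, 64), code_dim=16):
    """Build encoder/decoder stacks; more or wider layers = more capacity."""
    dims = [input_dim, *hidden_dims, code_dim]
    encoder_layers, decoder_layers = [], []
    # Encoder: progressively narrow the representation down to the code layer.
    for d_in, d_out in zip(dims[:-1], dims[1:]):
        encoder_layers += [nn.Linear(d_in, d_out), nn.ReLU()]
    # Decoder: mirror the encoder to reconstruct the input.
    for d_in, d_out in zip(reversed(dims[1:]), reversed(dims[:-1])):
        decoder_layers += [nn.Linear(d_in, d_out), nn.ReLU()]
    decoder_layers[-1] = nn.Sigmoid()  # outputs in [0, 1] for pixel-like data
    return nn.Sequential(*encoder_layers), nn.Sequential(*decoder_layers)


encoder, decoder = make_autoencoder()
x = torch.rand(32, 784)                      # a dummy batch of inputs
loss = nn.MSELoss()(decoder(encoder(x)), x)  # reconstruction error to minimize
loss.backward()
</syntaxhighlight>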
 
 
6. Stacking auto-encoders or RBMs (see the sketch below).
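A rough sketch of one common way to do point 6, greedy layer-wise stacking: each auto-encoder is trained on the codes produced by the previous one, and its encoder is kept as one layer of the deeper network. The PyTorch code, layer sizes, and training settings below are illustrative assumptions, not taken from this page.

<syntaxhighlight lang="python">
# Hypothetical sketch of greedy layer-wise stacking: each auto-encoder is
# trained on the codes of the previous one; its encoder is then left
# untouched and used as one layer of the final deep encoder.
import torch
import torch.nn as nn


def train_layer(data, in_dim, out_dim, epochs=10, lr=1e-3):
    """Train a single one-hidden-layer auto-encoder and return its encoder."""
    enc = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU())
    dec = nn.Linear(out_dim, in_dim)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.MSELoss()(dec(enc(data)), data)  # reconstruct this layer's input
        loss.backward()
        opt.step()
    return enc


data = torch.rand(256, 784)           # dummy dataset
sizes = [784, 256, 64]                # widths of the stacked code layers
stack = []
for in_dim, out_dim in zip(sizes[:-1], sizes[1:]):
    enc = train_layer(data, in_dim, out_dim)
    stack.append(enc)
    with torch.no_grad():
        data = enc(data)              # codes become the next layer's input
deep_encoder = nn.Sequential(*stack)  # stacked encoder, ready for fine-tuning
</syntaxhighlight>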
 
== Training ==