Various techniques exist to prevent autoencoders from learning the [[identity function]] and to improve their ability to capture important information and learn richer representations.
====Sparse autoencoder====
[[File:Autoencoder sparso.png|thumb|Simple schema of a single-layer sparse autoencoder. The hidden nodes in bright yellow are activated, while the light yellow ones are inactive. The activation depends on the input.]]
Learning [[Representation learning|representations]] in a way that encourages sparsity improves performance on classification tasks.<ref name=":5">{{Cite journal|last1=Frey|first1=Brendan|last2=Makhzani|first2=Alireza|date=2013-12-19|title=k-Sparse Autoencoders|arxiv=1312.5663|bibcode=2013arXiv1312.5663M}}</ref> Sparse autoencoders may include more (rather than fewer) hidden units than inputs, but only a small number of the hidden units are allowed to be active at the same time (thus, sparse).<ref name="domingos" /> This constraint forces the model to respond to the unique statistical features of the training data.
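The k-sparse constraint cited above can be sketched as a simple activation rule: after the encoder's linear step, keep only the k largest hidden activations and zero the rest. The following NumPy snippet is an illustrative sketch of that rule (the function name <code>k_sparse</code> is not from the cited paper):

```python
import numpy as np

def k_sparse(h, k):
    """Keep only the k largest activations in a hidden vector; zero the rest.

    Illustrative sketch of the k-sparse constraint (Makhzani & Frey, 2013):
    only k hidden units may be active at once, so the representation
    is sparse by construction.
    """
    h = np.asarray(h, dtype=float)
    out = np.zeros_like(h)
    # Indices of the k largest entries along the last axis.
    idx = np.argpartition(h, -k, axis=-1)[..., -k:]
    # Copy just those k entries into the otherwise-zero output.
    np.put_along_axis(out, idx, np.take_along_axis(h, idx, axis=-1), axis=-1)
    return out

# Example: a hidden layer with 6 units, of which only k=2 may stay active.
h = np.array([0.1, -0.5, 2.0, 0.3, 1.2, -0.1])
print(k_sparse(h, 2))  # only the two largest activations (2.0 and 1.2) survive
```

During training, gradients flow only through the surviving units, which pushes different inputs to activate different small subsets of the hidden layer.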