{{Toclimit|3}}
==Basic architecture==
An autoencoder has two main parts: an encoder that maps the input into the code, and a decoder that maps the code to a reconstruction of the input.

The simplest way to perform the copying task perfectly would be to duplicate the signal. Instead, autoencoders are typically forced to reconstruct the input approximately, preserving only the most relevant aspects of the data in the copy.

The idea of autoencoders has been popular for decades. Their most traditional application was [[dimensionality reduction]] or [[feature learning]], but the concept became widely used for learning [[generative model]]s of data. Some of the most powerful AIs in the 2010s involved sparse autoencoders stacked inside deep neural networks.

The simplest form of an autoencoder is a [[Feedforward neural network|feedforward]], non-recurrent neural network employing an input layer and an output layer connected by one or more hidden layers. The output layer has the same number of nodes (neurons) as the input layer. Its purpose is to reconstruct its inputs (minimizing the difference between the input and the output) instead of predicting a target value <math>Y</math> given inputs <math>X</math>. Therefore, autoencoders learn [[Unsupervised learning|unsupervised]].
The autoencoder consists of two parts, the encoder <math>\phi</math> and the decoder <math>\psi</math>, which can be defined as transitions
:<math>\phi:\mathcal{X} \rightarrow \mathcal{F}</math>
:<math>\psi:\mathcal{F} \rightarrow \mathcal{X}</math>
:<math>\phi,\psi = \underset{\phi,\psi}{\operatorname{arg\,min}}\, \|\mathbf{X} - (\psi \circ \phi)\mathbf{X}\|^2</math>
In the simplest case, given one hidden layer, the encoder stage of an autoencoder takes the input <math>\mathbf{x} \in \mathbb{R}^d = \mathcal{X}</math> and maps it to <math>\mathbf{h} \in \mathbb{R}^p = \mathcal{F}</math>:
:<math>\mathbf{h} = \sigma(\mathbf{Wx}+\mathbf{b})</math>
This image <math>\mathbf{h}</math> is usually referred to as the ''code'', ''latent variables'', or a ''latent representation''. Here, <math>\sigma</math> is an element-wise [[activation function]] such as a [[sigmoid function]] or a [[Rectifier (neural networks)|rectified linear unit]], <math>\mathbf{W}</math> is a weight matrix and <math>\mathbf{b}</math> is a bias vector. Weights and biases are typically initialized randomly and then updated iteratively during training through [[backpropagation]]. The decoder stage of the autoencoder then maps <math>\mathbf{h}</math> to the reconstruction <math>\mathbf{x'}</math> of the same shape as <math>\mathbf{x}</math>:
:<math>\mathbf{x'} = \sigma'(\mathbf{W'h}+\mathbf{b'})</math>
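The following Python/NumPy sketch illustrates this single-hidden-layer forward pass. The dimensions, the sigmoid activation, and the random initialization are illustrative assumptions, and training via backpropagation is omitted:

<syntaxhighlight lang="python">
import numpy as np

def sigmoid(z):
    # element-wise activation σ
    return 1.0 / (1.0 + np.exp(-z))

d, p = 784, 64                      # illustrative input and code dimensions
rng = np.random.default_rng(0)

# encoder parameters (W, b) and decoder parameters (W', b'), initialized randomly
W = rng.normal(scale=0.01, size=(p, d));  b = np.zeros(p)
W2 = rng.normal(scale=0.01, size=(d, p)); b2 = np.zeros(d)

x = rng.random(d)                   # an input vector x in R^d
h = sigmoid(W @ x + b)              # code h = σ(Wx + b)
x_rec = sigmoid(W2 @ h + b2)        # reconstruction x' = σ'(W'h + b')

loss = np.mean((x - x_rec) ** 2)    # squared reconstruction error to be minimized
</syntaxhighlight>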
====Sparse autoencoder (SAE)====
[[File:Autoencoder sparso.png|thumb|Simple schema of a single-layer sparse autoencoder. The hidden nodes in bright yellow are activated, while the light yellow ones are inactive. The activation depends on the input.]]
Learning [[Representation learning|representations]] in a way that encourages sparsity improves performance on classification tasks.<ref name=":5">{{Cite journal|last1=Makhzani|first1=Alireza|last2=Frey|first2=Brendan|date=2013-12-19|title=k-Sparse Autoencoders|arxiv=1312.5663|bibcode=2013arXiv1312.5663M}}</ref> Sparse autoencoders may include more (rather than fewer) hidden units than inputs, but only a small number of the hidden units are allowed to be active at the same time (thus, sparse).<ref name="domingos">{{cite book|last1=Domingos|first1=Pedro|title=The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World|title-link=The Master Algorithm|date=2015|publisher=Basic Books|isbn=978-046506192-1|at="Deeper into the Brain" subsection|chapter=4|author-link=Pedro Domingos}}</ref> This constraint forces the model to respond to the unique statistical features of the training data.
Specifically, a sparse autoencoder is an autoencoder whose training criterion involves a sparsity penalty <math>\Omega(\boldsymbol h)</math> on the code layer <math>\boldsymbol h</math>.
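As an illustration, one common choice for <math>\Omega(\boldsymbol h)</math> is an L1 penalty on the code activations. The sketch below adds such a penalty to the squared reconstruction error; the weighting coefficient <code>lam</code> is a hypothetical hyperparameter, and a KL-divergence penalty toward a low target activation rate is another common choice:

<syntaxhighlight lang="python">
import numpy as np

def sparse_autoencoder_loss(x, x_rec, h, lam=1e-3):
    """Reconstruction error plus a sparsity penalty Ω(h) on the code layer.

    Here Ω(h) is taken to be an L1 penalty λ·Σ|h_i|, with lam (λ) a hypothetical
    weighting coefficient chosen for illustration.
    """
    reconstruction = np.mean((x - x_rec) ** 2)   # squared reconstruction error
    penalty = lam * np.sum(np.abs(h))            # sparsity penalty Ω(h)
    return reconstruction + penalty
</syntaxhighlight>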