== Applications ==
The two main applications of autoencoders are dimensionality reduction and information retrieval,<ref name=":0">{{Cite book|url=http://www.deeplearningbook.org|title=Deep Learning|last1=Goodfellow|first1=Ian|last2=Bengio|first2=Yoshua|last3=Courville|first3=Aaron|publisher=MIT Press|date=2016|isbn=978-0262035613}}</ref> but modern variations have been applied to other tasks.
=== Dimensionality reduction ===
[[File:PCA vs Linear Autoencoder.png|thumb|Plot of the first two principal components (left) and a two-dimensional hidden layer of a linear autoencoder (right) applied to the [[Fashion MNIST dataset]].<ref name=":10">{{Cite web|url=https://github.com/zalandoresearch/fashion-mnist|title=Fashion MNIST|website=[[GitHub]]|date=2019-07-12}}</ref> Since both models are linear, they learn to span the same subspace, and the projections of the data points are identical up to a rotation of the subspace.]][[Dimensionality reduction]] was one of the first [[deep learning]] applications.<ref name=":0" />
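The equivalence illustrated in the figure can be sketched in a few lines of code. The following is a minimal, illustrative example rather than the experiment behind the plot: it substitutes synthetic low-rank data for Fashion-MNIST, trains a two-unit linear autoencoder by plain gradient descent on the squared reconstruction error, and checks that the learned code spans the same plane as the first two principal components. All variable names and hyperparameters here are hypothetical choices.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a dataset such as Fashion-MNIST:
# 2-D latent structure embedded in 20 dimensions, plus noise.
latent = rng.normal(size=(1000, 2))
X = latent @ rng.normal(size=(2, 20)) + 0.05 * rng.normal(size=(1000, 20))
X -= X.mean(axis=0)  # center the data, as PCA does

# PCA: project onto the top-2 right singular vectors of the centered data.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
Z_pca = X @ Vt[:2].T

# Linear autoencoder: encoder (20 -> 2) and decoder (2 -> 20),
# trained by gradient descent on the mean squared reconstruction error.
W_enc = rng.normal(scale=0.1, size=(20, 2))
W_dec = rng.normal(scale=0.1, size=(2, 20))
lr = 1e-2
for _ in range(2000):
    Z = X @ W_enc                      # encode
    X_hat = Z @ W_dec                  # decode
    G = 2.0 * (X_hat - X) / len(X)     # d(MSE)/d(X_hat)
    grad_dec = Z.T @ G                 # chain rule through the decoder
    grad_enc = X.T @ (G @ W_dec.T)     # chain rule through the encoder
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

Z_ae = X @ W_enc

# If both codes span the same subspace, one is an invertible linear
# map of the other, so this least-squares residual should be near zero.
M, *_ = np.linalg.lstsq(Z_ae, Z_pca, rcond=None)
print("subspace mismatch:", np.linalg.norm(Z_ae @ M - Z_pca))
</syntaxhighlight>

The final check compares subspaces rather than raw coordinates, reflecting the point made in the caption: the two codes agree only up to a rotation (more generally, an invertible linear map) of the shared subspace.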