Autoencoder: Difference between revisions

 
==Structure==
Architecturally, the simplest form of an autoencoder is a feedforward, non-recurrent neural network very similar to the [[multilayer perceptron]] (MLP) – having an input layer, an output layer, and one or more hidden layers connecting them – but with the output layer having the same number of nodes as the input layer, and with the purpose of ''reconstructing'' its own inputs (instead of predicting a target value <math>Y</math> given inputs <math>X</math>). Therefore, autoencoders are [[unsupervised learning]] models.
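
For illustration, a minimal single-hidden-layer autoencoder of this form might be sketched as follows using the Keras library; the input dimension of 784 (a flattened 28×28 image) and code dimension of 32 are arbitrary example values, not taken from the article:

<syntaxhighlight lang="python">
# Illustrative sketch of a minimal autoencoder (Keras).
# Assumed example sizes: 784-dimensional inputs, 32-dimensional hidden code.
from tensorflow import keras
from tensorflow.keras import layers

input_dim = 784   # number of nodes in the input layer
code_dim = 32     # number of nodes in the hidden (code) layer

inputs = keras.Input(shape=(input_dim,))
# Encoder: maps the input to the hidden representation
code = layers.Dense(code_dim, activation="relu")(inputs)
# Decoder: reconstructs the input from the hidden representation;
# the output layer has the same number of nodes as the input layer
outputs = layers.Dense(input_dim, activation="sigmoid")(code)

autoencoder = keras.Model(inputs, outputs)
# Trained to reconstruct its own inputs, i.e. the targets are the inputs themselves
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(x_train, x_train, epochs=10, batch_size=256)
</syntaxhighlight>

Because the network is trained with its own inputs as targets, no labels are required, which is what makes the model unsupervised.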
 
An autoencoder always consists of two parts, the encoder and the decoder, which can be defined as transitions <math>\phi</math> and <math>\psi,</math> such that: