Autoencoder: Difference between revisions
{{Main|Variational autoencoder}}
 
[[Variational autoencoder]]s (VAEs) belong to the family of [[variational Bayesian methods]]. Despite the architectural similarities with basic autoencoders, VAEs are architected with different goals and have a completely different mathematical formulation. The latent space is, in this case, composed of a mixture of distributions instead of fixed vectors.
 
Given an input dataset <math>x</math> characterized by an unknown probability function <math>P(x)</math> and a multivariate latent encoding vector <math>z</math>, the objective is to model the data as a distribution <math>p_\theta(x)</math>, where <math>\theta</math> denotes the set of network parameters, so that <math>p_\theta(x) = \int_{z}p_\theta(x,z)dz </math>.
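The marginalization above can be sketched numerically. The following is a minimal illustration, not a trained VAE: a hypothetical linear map stands in for the decoder network, the prior is <math>p(z) = \mathcal{N}(0, I)</math>, and <math>p_\theta(x)</math> is approximated by Monte Carlo averaging of <math>p_\theta(x\mid z)</math> over samples of <math>z</math>.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical decoder f_theta: maps a latent z to the mean of p_theta(x|z).
# A fixed linear map stands in here for a trained neural network.
theta = np.array([[1.0, 0.5],
                  [-0.3, 0.8]])

def decoder_mean(z):
    return z @ theta.T

def log_gaussian(x, mean, sigma):
    # log N(x; mean, sigma^2 I), evaluated along the last axis
    d = x.shape[-1]
    return (-0.5 * np.sum((x - mean) ** 2, axis=-1) / sigma**2
            - 0.5 * d * np.log(2 * np.pi * sigma**2))

def estimate_log_px(x, n_samples=100_000, sigma=1.0):
    # Monte Carlo estimate of p_theta(x) = integral of p_theta(x|z) p(z) dz:
    # draw z ~ p(z) = N(0, I) and average p_theta(x|z) over the samples.
    z = rng.standard_normal((n_samples, 2))
    log_px_given_z = log_gaussian(x, decoder_mean(z), sigma)
    # log-mean-exp for numerical stability
    m = log_px_given_z.max()
    return m + np.log(np.mean(np.exp(log_px_given_z - m)))

x = np.array([0.5, -0.2])
print(estimate_log_px(x))
```

In a real VAE this naive averaging is replaced by the variational lower bound, since sampling from the prior becomes hopelessly inefficient in high-dimensional latent spaces.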