Variational autoencoder: Difference between revisions

Evidence lower bound (ELBO): pedantic wording change
Tag: Reverted
Line 51:
{{Main|Evidence lower bound}}
 
Like many [[deep learning]] approaches that use gradient-based optimization, VAEs require a differentiable loss function to update the network weights through [[backpropagation]].
 
For variational autoencoders, the idea is to jointly optimize the generative model parameters <math>\theta</math> to reduce the reconstruction error between the input and the output, and the variational parameters <math>\phi</math> to make <math>q_\phi(z|x)</math> as close as possible to <math>p_\theta(z|x)</math>. Common choices of reconstruction loss are [[mean squared error]] and [[cross entropy]].
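
The combined objective described above can be sketched numerically. This is a minimal illustrative example, not code from the article: it assumes a diagonal-Gaussian encoder <math>q_\phi(z|x)</math> and a standard-normal prior, for which the KL divergence has a well-known closed form, and uses mean squared error as the reconstruction loss. The function names are hypothetical.

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ),
    # summed over latent dimensions.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def vae_loss(x, x_recon, mu, log_var):
    # Negative ELBO sketch: squared-error reconstruction term
    # plus the KL regularizer pulling q_phi(z|x) toward the prior.
    recon = np.sum((x - x_recon) ** 2)
    return recon + kl_to_standard_normal(mu, log_var)

# When reconstruction is perfect and q equals the prior,
# both terms vanish and the loss is zero.
x = np.array([0.5, -0.2, 0.1, 0.0])
mu = np.zeros(4)
log_var = np.zeros(4)  # unit variance
print(vae_loss(x, x, mu, log_var))  # 0.0
```

In practice both terms are differentiable in <math>\theta</math> and <math>\phi</math>, which is what allows the joint objective to be minimized by backpropagation.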