Variational autoencoder: Difference between revisions

{{Main|Evidence lower bound}}
 
Like many [[deep learning]] approaches that use gradient-based optimization, VAEs require a differentiable loss function in order to update the network weights through [[backpropagation]].
 
For variational autoencoders, the idea is to jointly optimize the generative model parameters <math>\theta</math> to reduce the reconstruction error between the input and the output, and the variational parameters <math>\phi</math> to make <math>q_\phi(z|x)</math> as close as possible to <math>p_\theta(z|x)</math>. [[Mean squared error]] and [[cross entropy]] are often used as reconstruction losses.