Talk:Variational autoencoder: Difference between revisions

 
[[User:Ettmajor|Ettmajor]] ([[User talk:Ettmajor|talk]]) 10:06, 11 July 2021 (UTC)
 
== Does the prior <math>p(z)</math> depend on <math>\theta</math> or not? ==
 
In a vanilla Gaussian VAE, the prior is a standard Gaussian with zero mean and unit variance, i.e., the prior <math>p(z)</math> over the latent representations carries no parameters (<math>\theta</math> or otherwise).
On the other hand, the article as well as [Kingma&Welling2014] parametrize the prior as <math>p_\theta(z)</math> with <math>\theta</math>, just like the likelihood <math>p_\theta(x\mid z)</math>.
Clearly, the latter makes sense, since learning <math>\theta</math> is precisely the goal: the probabilistic decoder serves as the generative model for the likelihood <math>p_\theta(x\mid z)</math>.
So is there a deeper meaning in parametrizing the prior as <math>p_\theta(z)</math> as well, with the very same parameters <math>\theta</math> as the likelihood, or is it in fact a typo/mistake?
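To make the vanilla case concrete: with a fixed standard-Gaussian prior, the KL term of the ELBO has a well-known closed form in which no <math>\theta</math> appears — only the encoder outputs <math>\mu</math> and <math>\log\sigma^2</math> enter. A minimal sketch (the function name is illustrative, not from any particular library):

```python
import math

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, sigma^2) || N(0, 1) ) for one latent dimension.

    The prior N(0, 1) is fixed: no decoder parameter theta appears in
    this term, which is the sense in which the vanilla VAE prior is
    unparametrized.
    """
    return 0.5 * (math.exp(log_var) + mu**2 - 1.0 - log_var)

# KL is zero exactly when the approximate posterior equals the prior,
# i.e. mu = 0 and log_var = 0 (sigma = 1):
print(kl_to_standard_normal(0.0, 0.0))  # → 0.0
```

Under the more general notation <math>p_\theta(z)</math>, this KL term would instead depend on <math>\theta</math> as well, so the two conventions do lead to different gradients in principle.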