Talk:Variational autoencoder: Difference between revisions

On the other hand, both the article and [Kingma&Welling2014] parametrize the prior <math>p_\theta(z)</math> with <math>\theta</math>, just like the likelihood <math>p_\theta(x\mid z)</math>.
The latter clearly makes sense, since the very goal is to learn <math>\theta</math> through the probabilistic decoder as the generative model for the likelihood <math>p_\theta(x\mid z)</math>.
So is there a deeper meaning or sense in parametrizing the prior as <math>p_\theta(z)</math> as well, with the very same parameters <math>\theta</math> as the likelihood, or is it in fact a typo/mistake? <!-- Template:Unsigned IP --><small class="autosigned">—&nbsp;Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/46.223.162.38|46.223.162.38]] ([[User talk:46.223.162.38#top|talk]]) 22:11, 11 October 2021 (UTC)</small> <!--Autosigned by SineBot-->
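For what it's worth, in the Gaussian example of Kingma & Welling the prior is the fixed standard normal <math>\mathcal N(0, I)</math>, so no component of <math>\theta</math> actually appears in it; the subscript arguably just keeps the notation uniform across the generative model. A minimal sketch (not from the paper, purely illustrative) of the resulting closed-form KL term makes this visible: only the encoder outputs enter, no decoder parameters:

```python
import math

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over
    latent dimensions. Note that no decoder/prior parameter theta appears:
    with a fixed standard-normal prior there is nothing to learn in p(z)."""
    return 0.5 * sum(
        m * m + math.exp(lv) - 1.0 - lv
        for m, lv in zip(mu, log_var)
    )

# When the approximate posterior equals the prior (mu = 0, log sigma^2 = 0),
# the KL term vanishes.
print(kl_to_standard_normal([0.0, 0.0], [0.0, 0.0]))  # -> 0.0
```

(Here `mu` and `log_var` stand for the encoder outputs, i.e. functions of <math>\phi</math>, not <math>\theta</math>.)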