== The image shows just a normal autoencoder, not a variational autoencoder ==
Clearly, the latter makes sense, since the very goal is to learn <math>\theta</math> through the probabilistic decoder, the generative model for the likelihood <math>p_\theta(x\mid z)</math>.
So is there a deeper meaning in parametrizing the prior as <math>p_\theta(z)</math> as well, with the very same parameters <math>\theta</math> as the likelihood, or is it in fact a typo/mistake? <!-- Template:Unsigned IP --><small class="autosigned">— Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/46.223.162.38|46.223.162.38]] ([[User talk:46.223.162.38#top|talk]]) 22:11, 11 October 2021 (UTC)</small> <!--Autosigned by SineBot-->
:In the standard setup the prior is typically not learned at all: it is fixed (commonly a standard normal <math>\mathcal{N}(0, I)</math>), while <math>\theta</math> parametrizes the decoder <math>p_\theta(x\mid z)</math> and the separate set of parameters <math>\phi</math> parametrizes the encoder <math>q_\phi(z\mid x)</math>, not the prior. Writing <math>p_\theta(z)</math> is largely a notational convention; a learned prior is possible but not the usual case.
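:For reference, writing out the evidence lower bound makes the parameter roles explicit (this assumes the common choice of a fixed standard-normal prior <math>p(z)</math>, with no learned parameters of its own):
:<math>\mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z\mid x)}\!\left[\log p_\theta(x\mid z)\right] - D_{\mathrm{KL}}\!\left(q_\phi(z\mid x) \,\|\, p(z)\right)</math>
:Here <math>\theta</math> appears only in the decoder term and <math>\phi</math> only in the encoder distribution, so under this formulation neither parameter set touches the prior.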