Variational autoencoder: Difference between revisions

Fix formatting.
Stochastic gradient descent has nothing to do with taking expectations. Undid revision 1280088605 by G.S.Ray (talk)
Line 110:
We obtain the final formula for the loss:
<math display="block"> L_{\theta,\phi} = \mathbb{E}_{x \sim \mathbb{P}^{real}} \left[ \|x - D_\theta(E_\phi(x))\|_2^2\right]
+d \left( \mu(dz), E_\phi \sharp \mathbb{P}^{real} \right)^2</math>

The statistical distance <math>d</math> must satisfy certain properties that depend on the algorithm used to minimize this loss function; for instance, it has to be expressible as an expectation if the loss is to be optimized by a [[Stochastic gradient descent|stochastic optimization algorithm]]. Several distances can be chosen, and this has given rise to several flavors of VAEs:
* the sliced Wasserstein distance used by Kolouri et al. in their VAE<ref>{{Cite conference |last1=Kolouri |first1=Soheil |last2=Pope |first2=Phillip E. |last3=Martin |first3=Charles E. |last4=Rohde |first4=Gustavo K. |date=2019 |title=Sliced Wasserstein Auto-Encoders |url=https://openreview.net/forum?id=H1xaJn05FQ |conference=International Conference on Learning Representations |publisher=ICPR |book-title=International Conference on Learning Representations}}</ref>
* the [[energy distance]] implemented in the Radon Sobolev Variational Auto-Encoder<ref>{{Cite journal |last=Turinici |first=Gabriel |year=2021 |title=Radon-Sobolev Variational Auto-Encoders |url=https://www.sciencedirect.com/science/article/pii/S0893608021001556 |journal=Neural Networks |volume=141 |pages=294–305 |arxiv=1911.13135 |doi=10.1016/j.neunet.2021.04.018 |issn=0893-6080 |pmid=33933889}}</ref>
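
As a rough illustration only (not code from the cited papers), the sketch below estimates the loss above on a minibatch, choosing <math>d</math> to be the [[energy distance]], whose square admits the expectation formula
<math display="block"> d(\mu,\nu)^2 = 2\,\mathbb{E}\|Z - W\| - \mathbb{E}\|Z - Z'\| - \mathbb{E}\|W - W'\|,</math>
with <math>Z, Z' \sim \mu</math> and <math>W, W' \sim \nu</math> independent. Because both terms of the loss are expectations, a minibatch yields a stochastic estimate that a stochastic gradient optimizer can use. The network sizes and the names <code>encoder</code>, <code>decoder</code>, <code>latent_dim</code> are placeholders, not taken from the references.

<syntaxhighlight lang="python">
# Minimal sketch: minibatch estimate of
#   L = E ||x - D(E(x))||^2 + d(mu, E#P_real)^2
# with d chosen as the energy distance and mu a standard normal prior.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 784   # illustrative sizes only

encoder = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))

def pairwise_dist(a, b, eps=1e-8):
    # Euclidean distances between all rows of a and b; eps keeps sqrt differentiable at 0
    d2 = ((a.unsqueeze(1) - b.unsqueeze(0)) ** 2).sum(-1)
    return torch.sqrt(d2 + eps)

def energy_distance_sq(z, w):
    # Sample estimate of 2 E||Z-W|| - E||Z-Z'|| - E||W-W'||
    return (2 * pairwise_dist(z, w).mean()
            - pairwise_dist(z, z).mean()
            - pairwise_dist(w, w).mean())

def loss_fn(x):
    z = encoder(x)                                # samples from E_phi # P_real
    x_rec = decoder(z)                            # D_theta(E_phi(x))
    rec = ((x - x_rec) ** 2).sum(dim=1).mean()    # E ||x - D(E(x))||_2^2
    w = torch.randn_like(z)                       # samples from the prior mu
    return rec + energy_distance_sq(z, w)         # reconstruction + d(mu, E#P)^2

# Usage: one stochastic gradient step on a stand-in minibatch
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
x = torch.randn(64, data_dim)
opt.zero_grad()
loss_fn(x).backward()
opt.step()
</syntaxhighlight>

A sliced Wasserstein variant in the spirit of Kolouri et al. could be obtained by replacing <code>energy_distance_sq</code> with an average of one-dimensional Wasserstein distances over random projections of <code>z</code> and <code>w</code>; the rest of the loss is unchanged.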