{{Machine learning bar}}
In [[machine learning]], a '''variational autoencoder''' ('''VAE''') is an [[artificial neural network]] architecture introduced by Diederik P. Kingma and [[Max Welling]].<ref>{{Citation |last=Kingma |first=Diederik P. |title=Auto-Encoding Variational Bayes |date=2022-12-10 |url=http://arxiv.org/abs/1312.6114 |access-date=2024-06-12 |doi=10.48550/arXiv.1312.6114 |last2=Welling |first2=Max}}</ref> It is part of the families of [[graphical model|probabilistic graphical models]] and [[variational Bayesian methods]].<ref>{{cite book |first1=Lucas |last1=Pinheiro Cinelli |first2=Matheus |last2=Araújo Marins |first3=Eduardo Antônio |last3=Barros da Silva |first4=Sérgio |last4=Lima Netto |display-authors=1 |title=Variational Methods for Machine Learning with Applications to Deep Networks |___location= |publisher=Springer |year=2021 |pages=111–149 |chapter=Variational Autoencoder |isbn=978-3-030-70681-4 |chapter-url=https://books.google.com/books?id=N5EtEAAAQBAJ&pg=PA111 |doi=10.1007/978-3-030-70679-1_5 |s2cid=240802776 }}</ref>
In addition to being seen as an [[autoencoder]] neural network architecture, variational autoencoders can also be studied within the mathematical formulation of [[variational Bayesian methods]]: the encoder network outputs the parameters of a variational distribution over a probabilistic [[latent space]] (for example, a [[multivariate Gaussian distribution]]), and samples from that distribution are passed to the decoder network.
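For illustration, the following is a minimal sketch of a VAE forward pass with a diagonal Gaussian variational posterior; it is not the implementation from the cited paper, and the layer sizes, initialisation, and activation choices are hypothetical.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions chosen for illustration only.
x_dim, h_dim, z_dim = 784, 128, 16

def dense(in_dim, out_dim):
    """Randomly initialised affine layer: (weights, bias)."""
    return rng.normal(0.0, 0.01, (in_dim, out_dim)), np.zeros(out_dim)

# Encoder parameters: shared hidden layer, then mean and log-variance heads.
W_h, b_h = dense(x_dim, h_dim)
W_mu, b_mu = dense(h_dim, z_dim)
W_lv, b_lv = dense(h_dim, z_dim)
# Decoder parameters.
W_d1, b_d1 = dense(z_dim, h_dim)
W_d2, b_d2 = dense(h_dim, x_dim)

def encode(x):
    """Map input x to the parameters (mu, log_var) of a diagonal Gaussian."""
    h = np.tanh(x @ W_h + b_h)
    return h @ W_mu + b_mu, h @ W_lv + b_lv

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps with eps ~ N(0, I) (reparameterization trick)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def decode(z):
    """Map a latent sample z back to a reconstruction of x in [0, 1]."""
    h = np.tanh(z @ W_d1 + b_d1)
    return 1.0 / (1.0 + np.exp(-(h @ W_d2 + b_d2)))  # sigmoid output

# Forward pass on a random batch of 4 inputs.
x = rng.random((4, x_dim))
mu, log_var = encode(x)
z = reparameterize(mu, log_var)
x_hat = decode(z)

# KL divergence between the Gaussian variational posterior and a standard normal prior,
# one of the two terms of the evidence lower bound that training maximises.
kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=1)
</syntaxhighlight>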