{{Machine learning bar}}
 
In [[machine learning]], a '''variational autoencoder''' ('''VAE''') is an [[artificial neural network]] architecture introduced by Diederik P. Kingma and [[Max Welling]].<ref>{{cite arXiv |last1=Kingma |first1=Diederik P. |last2=Welling |first2=Max |title=Auto-Encoding Variational Bayes |date=2022-12-10 |arxiv=1312.6114 }}</ref> It is part of the families of [[graphical model|probabilistic graphical models]] and [[variational Bayesian methods]].<ref>{{cite book |first1=Lucas |last1=Pinheiro Cinelli |first2=Matheus |last2=Araújo Marins |first3=Eduardo Antônio |last3=Barros da Silva |first4=Sérgio |last4=Lima Netto |display-authors=1 |title=Variational Methods for Machine Learning with Applications to Deep Networks |publisher=Springer |year=2021 |pages=111–149 |chapter=Variational Autoencoder |isbn=978-3-030-70681-4 |chapter-url=https://books.google.com/books?id=N5EtEAAAQBAJ&pg=PA111 |doi=10.1007/978-3-030-70679-1_5 |s2cid=240802776 }}</ref>
 
Beyond its role as an [[autoencoder]] neural network architecture, the variational autoencoder can also be studied within the mathematical framework of [[variational Bayesian methods]]: the encoder network maps an input to the parameters of a variational distribution over a probabilistic [[latent space]] (for example, a [[multivariate Gaussian distribution]]), from which the decoder reconstructs the input.
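As a minimal illustrative sketch (not taken from the cited works), the encoder-to-latent step can be written with toy linear "encoder" weights: the encoder outputs the mean and log-variance of a diagonal Gaussian, and a latent sample is drawn via the reparameterization z = μ + σ·ε with ε ~ N(0, I). The weight matrices and dimensions here are arbitrary placeholders, not part of any published architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    # Hypothetical linear "encoder": maps input x to the parameters
    # (mean, log-variance) of a diagonal Gaussian q(z|x).
    mu = W_mu @ x
    logvar = W_logvar @ x
    return mu, logvar

def sample_latent(mu, logvar, rng):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    # so the sampling step stays differentiable in mu and logvar.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

x = rng.standard_normal(4)               # toy 4-dimensional input
W_mu = rng.standard_normal((2, 4))       # placeholder encoder weights
W_logvar = rng.standard_normal((2, 4))   # placeholder encoder weights
mu, logvar = encode(x, W_mu, W_logvar)
z = sample_latent(mu, logvar, rng)       # 2-dimensional latent code for the decoder
```

In a trained VAE these weight matrices would be deep neural networks fitted by maximizing the evidence lower bound; the sketch only shows how the latent space is parameterized as a distribution rather than a fixed code.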