In other words, minimizing the [[Kullback–Leibler divergence]] between the model's likelihood and the target distribution is equivalent to [[Maximum likelihood estimation|maximizing the model likelihood]] under observed samples of the target distribution.<ref>{{Cite journal |last1=Papamakarios |first1=George |last2=Nalisnick |first2=Eric |last3=Rezende |first3=Danilo Jimenez |last4=Mohamed |first4=Shakir |last5=Lakshminarayanan |first5=Balaji |date=March 2021 |title=Normalizing Flows for Probabilistic Modeling and Inference |journal=Journal of Machine Learning Research |url=https://jmlr.org/papers/v22/19-1028.html |volume=22 |issue=57 |pages=1–64 |arxiv=1912.02762}}</ref>
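Concretely, writing <math>p^*</math> for the target distribution and <math>p_\theta</math> for the model likelihood, the divergence decomposes as (a standard identity, spelled out here for clarity):

<math display="block">D_{KL}(p^* \,\|\, p_\theta) = \mathbb{E}_{x \sim p^*}[\ln p^*(x)] - \mathbb{E}_{x \sim p^*}[\ln p_\theta(x)].</math>

The first term does not depend on <math>\theta</math>, so minimizing the divergence over <math>\theta</math> amounts to maximizing <math>\mathbb{E}_{x \sim p^*}[\ln p_\theta(x)]</math>, which in practice is estimated by the empirical average <math>\tfrac{1}{n}\sum_{j=1}^n \ln p_\theta(x_j)</math> over the observed samples.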
Pseudocode for training normalizing flows is as follows:<ref>{{Cite journal |last1=Kobyzev |first1=Ivan |last2=Prince |first2=Simon J.D. |last3=Brubaker |first3=Marcus A. |date=November 2021 |title=Normalizing Flows: An Introduction and Review of Current Methods |journal=IEEE Transactions on Pattern Analysis and Machine Intelligence |volume=43 |issue=11 |pages=3964–3979 |doi=10.1109/TPAMI.2020.2992934 |arxiv=1908.09257}}</ref>
* INPUT. dataset <math>x_{1:n}</math>, normalizing flow model <math>f_\theta(\cdot), p_0</math>.
* SOLVE. by gradient descent <math>\hat\theta \leftarrow \arg\max_\theta \sum_j \ln p_\theta(x_j)</math>
* RETURN. <math>\hat\theta</math>
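The following is a minimal sketch of this loop, assuming PyTorch and a single affine flow <math>f_\theta(z) = e^{s} z + b</math> with standard normal base <math>p_0</math>; the variable names and toy dataset are illustrative, not taken from the cited references.

<syntaxhighlight lang="python">
# Minimal sketch: maximum-likelihood training of a 1-D affine normalizing
# flow f_theta(z) = exp(s) * z + b, with base distribution p_0 = N(0, 1).
# (Illustrative only; names and dataset are not from the cited references.)
import torch

base = torch.distributions.Normal(0.0, 1.0)  # base distribution p_0
s = torch.zeros(1, requires_grad=True)       # log-scale parameter
b = torch.zeros(1, requires_grad=True)       # shift parameter

x = 3.0 + 0.5 * torch.randn(1000)            # toy dataset x_{1:n} ~ N(3, 0.5^2)

opt = torch.optim.Adam([s, b], lr=1e-2)
for step in range(2000):
    z = (x - b) * torch.exp(-s)              # inverse flow z = f_theta^{-1}(x)
    # Change of variables: ln p_theta(x) = ln p_0(z) + ln|dz/dx| = ln p_0(z) - s
    log_px = base.log_prob(z) - s
    loss = -log_px.mean()                    # negative log-likelihood
    opt.zero_grad()
    loss.backward()
    opt.step()

print(float(b), float(s.exp()))              # recovered shift ~3.0, scale ~0.5
</syntaxhighlight>

Deeper flows stack many such invertible layers and accumulate their log-determinant terms, but the training loop, gradient ascent on <math>\sum_j \ln p_\theta(x_j)</math>, is unchanged.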