Flow-based generative model

: <math>\log p_K(z_K) = \log p_0(z_0) - \sum_{i=1}^{K} \log \left|\det \frac{df_i(z_{i-1})}{dz_{i-1}}\right|</math>
 
To compute the log likelihood efficiently, the functions <math>f_1, ..., f_K</math> should be (1) easy to invert, and (2) have Jacobian determinants that are easy to compute. In practice, the functions <math>f_1, ..., f_K</math> are modeled using [[Deep learning|deep neural networks]], and are trained to minimize the negative log-likelihood of data samples from the target distribution. These architectures are usually designed so that only the forward pass of the neural network is needed for both the inverse and the Jacobian determinant calculations. Examples of such architectures include NICE<ref>{{cite arXiv | eprint=1410.8516}}</ref>, RealNVP<ref>{{cite arXiv | eprint=1605.08803}}</ref>, and Glow<ref>{{cite arXiv | eprint=1807.03039}}</ref>.
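As an illustration of both requirements, the following is a minimal NumPy sketch (not from the article) of a RealNVP-style affine coupling layer. The functions <code>s</code> and <code>t</code> are tiny hand-written maps standing in for the neural networks described above; the Jacobian of the coupling transform is triangular, so its log determinant is just a sum, and the inverse needs only forward evaluations of <code>s</code> and <code>t</code>.

```python
import numpy as np

def s(x):  # stand-in for the learned scale network
    return np.tanh(x)

def t(x):  # stand-in for the learned translation network
    return 0.5 * x

def forward(z):
    """f(z): pass z1 through, scale and shift z2 using z1.
    Returns the output and log|det Jacobian|."""
    z1, z2 = np.split(z, 2)
    x2 = z2 * np.exp(s(z1)) + t(z1)
    # Jacobian is triangular with diagonal exp(s(z1)):
    log_det = np.sum(s(z1))
    return np.concatenate([z1, x2]), log_det

def inverse(x):
    """f^{-1}(x): exact inverse, without inverting s or t."""
    x1, x2 = np.split(x, 2)
    z2 = (x2 - t(x1)) * np.exp(-s(x1))
    return np.concatenate([x1, z2])

rng = np.random.default_rng(0)
z0 = rng.standard_normal(4)
zK, log_det = forward(z0)

# Change of variables with a standard normal base density p_0 on R^4:
# log p_K(z_K) = log p_0(z_0) - log|det J|
log_p0 = -0.5 * np.sum(z0**2) - 2 * np.log(2 * np.pi)
log_pK = log_p0 - log_det

# The inverse reconstructs z0 exactly, as required for exact likelihoods.
assert np.allclose(inverse(zK), z0)
```

Stacking several such layers (alternating which half is passed through) gives a flow whose total log determinant is the sum of the per-layer terms, matching the formula above.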
 
=== Derivation of log likelihood ===
 
: <math>\log p_K(z_K) = \log p_0(z_0) - \sum_{i=1}^{K} \log \left|\det \frac{df_i(z_{i-1})}{dz_{i-1}}\right|</math>
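The summed expression follows by applying the single-step change-of-variables formula at each layer,

: <math>\log p_i(z_i) = \log p_{i-1}(z_{i-1}) - \log \left|\det \frac{df_i(z_{i-1})}{dz_{i-1}}\right|</math>

and summing this identity over <math>i = 1, ..., K</math>, which telescopes the intermediate densities away and leaves only <math>\log p_0(z_0)</math> and the sum of log-determinant terms.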
 
== Examples ==
 
Flow-based models differ mainly in how the invertible transformations <math>f_i</math> are parameterized. Notable examples include:

* NICE, which composes additive coupling layers
* RealNVP, which generalizes additive coupling to affine coupling layers
* Glow, which inserts invertible 1×1 convolutions between coupling layers
 
== Applications ==
== External links ==
* [https://lilianweng.github.io/lil-log/2018/10/13/flow-based-deep-generative-models.html Flow-based Deep Generative Models]
* [https://deepgenerativemodels.github.io/notes/flow/ Normalizing flow models]
 
[[Category:Machine learning]]