: <math>\log p_K(z_K) = \log p_0(z_0) - \sum_{i=1}^{K} \log \left|\det \frac{df_i(z_{i-1})}{dz_{i-1}}\right|</math>
To efficiently compute the log likelihood, the functions <math>f_1, ..., f_K</math> should (1) be easy to invert, and (2) have a Jacobian determinant that is easy to compute. In practice, the functions <math>f_1, ..., f_K</math> are modeled using [[Deep learning|deep neural networks]], and are trained to minimize the negative log-likelihood of data samples from the target distribution. These architectures are usually designed such that only the forward pass of the neural network is required for both the inverse and the Jacobian determinant calculations. Examples of such architectures include NICE,<ref>{{cite arXiv | eprint=1410.8516 | title=NICE: Non-linear Independent Components Estimation | last1=Dinh | first1=Laurent | last2=Krueger | first2=David | last3=Bengio | first3=Yoshua | year=2014 | class=cs.LG}}</ref> RealNVP,<ref>{{cite arXiv | eprint=1605.08803 | title=Density estimation using Real NVP | last1=Dinh | first1=Laurent | last2=Sohl-Dickstein | first2=Jascha | last3=Bengio | first3=Samy | year=2016 | class=cs.LG}}</ref> and Glow.<ref name="glow">{{cite arXiv | eprint=1807.03039 | title=Glow: Generative Flow with Invertible 1x1 Convolutions | last1=Kingma | first1=Diederik P. | last2=Dhariwal | first2=Prafulla | year=2018 | class=stat.ML}}</ref>
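For example, both requirements are met by affine coupling layers of the kind used in RealNVP: one half of the variables passes through unchanged, while the other half is transformed by an elementwise affine map whose scale and shift are computed from the unchanged half. The Jacobian is then triangular, and inversion never requires inverting a neural network. The following Python sketch illustrates the idea; <code>scale_net</code> and <code>shift_net</code> are placeholders for arbitrary neural networks and are illustrative rather than taken from any particular implementation.

<syntaxhighlight lang="python">
import numpy as np

def coupling_forward(z, scale_net, shift_net):
    """RealNVP-style affine coupling: transform z and return log|det J|."""
    d = z.shape[-1] // 2
    z1, z2 = z[..., :d], z[..., d:]
    s, t = scale_net(z1), shift_net(z1)  # depend only on the untouched half
    y2 = z2 * np.exp(s) + t              # elementwise affine transformation
    log_det = np.sum(s, axis=-1)         # triangular Jacobian: log|det| = sum of log-scales
    return np.concatenate([z1, y2], axis=-1), log_det

def coupling_inverse(y, scale_net, shift_net):
    """Inverting the layer reuses the same networks in forward mode."""
    d = y.shape[-1] // 2
    y1, y2 = y[..., :d], y[..., d:]
    s, t = scale_net(y1), shift_net(y1)  # forward pass only, never inverted
    z2 = (y2 - t) * np.exp(-s)
    return np.concatenate([y1, z2], axis=-1)

# Toy usage with linear "networks"; real models use deep networks here.
rng = np.random.default_rng(0)
W_s, W_t = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
scale_net = lambda x: np.tanh(x @ W_s)   # bounded log-scales for stability
shift_net = lambda x: x @ W_t
z = rng.normal(size=(5, 6))
y, log_det = coupling_forward(z, scale_net, shift_net)
assert np.allclose(coupling_inverse(y, scale_net, shift_net), z)
</syntaxhighlight>

Summing the <code>log_det</code> terms of all layers gives the summation in the log-likelihood formula above, so the negative log-likelihood used as the training loss is available in closed form.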
=== Derivation of log likelihood ===
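In outline, the formula above follows by applying the change-of-variables rule for probability densities at each step of the flow. For a single invertible transformation <math>z_i = f_i(z_{i-1})</math>, the change of variables gives

: <math>p_i(z_i) = p_{i-1}(z_{i-1}) \left|\det \frac{df_i(z_{i-1})}{dz_{i-1}}\right|^{-1},</math>

so that

: <math>\log p_i(z_i) = \log p_{i-1}(z_{i-1}) - \log \left|\det \frac{df_i(z_{i-1})}{dz_{i-1}}\right|.</math>

Applying this relation recursively for <math>i = K, K-1, \ldots, 1</math> telescopes to the expression for <math>\log p_K(z_K)</math> above.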
== Applications ==
Flow-based generative models have been applied to a variety of modeling tasks, including:
* Audio generation<ref>{{cite arXiv | eprint=1912.01219}}</ref>
* Molecular graph generation<ref>{{cite arXiv | eprint=2001.09382}}</ref>
* Point-cloud modeling<ref>{{cite arXiv | eprint=1906.12320}}</ref>
* Video generation<ref>{{cite arXiv | eprint=1903.01434}}</ref>
== References ==
{{Reflist}}