A flow-based generative model is a generative model used in machine learning that explicitly models a probability distribution by leveraging a normalizing flow,[1] which is a statistical method using the change-of-variables law of probabilities to transform a simple distribution into a complex one.
The direct modeling of likelihood provides many advantages. For example, the negative log-likelihood can be computed directly and minimized as the loss function. Additionally, novel samples can be generated by sampling from the initial distribution and applying the flow transformation.
In contrast, many alternative generative modeling methods such as the variational autoencoder (VAE) and generative adversarial networks (GANs) do not explicitly represent the likelihood function.
Method
Let $z_0$ be a (possibly multivariate) random variable with distribution $p_0(z_0)$.
For $i = 1, \dots, K$, let $z_i = f_i(z_{i-1})$ be a sequence of random variables transformed from $z_0$. The functions $f_1, \dots, f_K$ should be invertible, i.e. the inverse function $f_i^{-1}$ exists. The final output $z_K$ models the target distribution.
The log likelihood of $z_K$ is (see derivation below):

$\log p_K(z_K) = \log p_0(z_0) - \sum_{i=1}^{K} \log \left| \det \frac{df_i(z_{i-1})}{dz_{i-1}} \right|$
To efficiently compute the log likelihood, the functions $f_1, \dots, f_K$ should be 1. easy to invert, and 2. have Jacobians whose determinants are easy to compute. In practice, the functions $f_i$ are modeled using deep neural networks, and are trained to minimize the negative log-likelihood of data samples from the target distribution. These architectures are usually designed such that only the forward pass of the neural network is required in both the inverse and the Jacobian determinant calculations. Examples of such architectures include NICE,[2] RealNVP,[3] and Glow.[4]
Derivation of log likelihood
Consider $z_1$ and $z_0$. Note that $z_0 = f_1^{-1}(z_1)$.
By the change of variables formula, the distribution of $z_1$ is:

$p_1(z_1) = p_0(z_0) \left| \det \frac{df_1^{-1}(z_1)}{dz_1} \right|$
where $\det \frac{df_1^{-1}(z_1)}{dz_1}$ is the determinant of the Jacobian matrix of $f_1^{-1}$.
By the inverse function theorem:

$\frac{df_1^{-1}(z_1)}{dz_1} = \left( \frac{df_1(z_0)}{dz_0} \right)^{-1}$
By the identity $\det(A^{-1}) = \det(A)^{-1}$ (where $A$ is an invertible matrix), we have:

$\left| \det \frac{df_1^{-1}(z_1)}{dz_1} \right| = \left| \det \frac{df_1(z_0)}{dz_0} \right|^{-1}$
The log likelihood is thus:

$\log p_1(z_1) = \log p_0(z_0) - \log \left| \det \frac{df_1(z_0)}{dz_0} \right|$
In general, the above applies to any $z_i$ and $z_{i-1}$. Since $\log p_i(z_i)$ equals $\log p_{i-1}(z_{i-1})$ minus a non-recursive term, we can infer by induction that:

$\log p_K(z_K) = \log p_0(z_0) - \sum_{i=1}^{K} \log \left| \det \frac{df_i(z_{i-1})}{dz_{i-1}} \right|$
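The identity can be checked numerically. The sketch below is a minimal, hypothetical example (not taken from the cited papers): it composes two invertible affine maps $f_i(z) = A_i z + b_i$, evaluates the flow log-likelihood as $\log p_0(z_0) - \sum_i \log|\det A_i|$, and compares it with the exact Gaussian density of the pushed-forward variable.

```python
import numpy as np

# Hypothetical two-layer affine flow f_i(z) = A_i z + b_i used only to
# verify the change-of-variables identity derived above.
rng = np.random.default_rng(0)
A1, b1 = np.array([[2.0, 0.3], [0.0, 1.5]]), np.array([0.5, -1.0])
A2, b2 = np.array([[1.2, 0.0], [0.4, 0.8]]), np.array([0.0, 2.0])

z0 = rng.standard_normal(2)                      # sample from the base N(0, I)
z1 = A1 @ z0 + b1
z2 = A2 @ z1 + b2                                # z2 = f2(f1(z0))

def log_std_normal(z):
    return -0.5 * (z @ z) - 0.5 * len(z) * np.log(2 * np.pi)

# log p_2(z_2) = log p_0(z_0) - sum_i log|det J_{f_i}|
log_p_flow = (log_std_normal(z0)
              - np.log(abs(np.linalg.det(A1)))
              - np.log(abs(np.linalg.det(A2))))

# Direct density: z2 = A z0 + b with A = A2 A1, so z2 ~ N(b, A A^T).
A, b = A2 @ A1, A2 @ b1 + b2
cov = A @ A.T
diff = z2 - b
log_p_direct = (-0.5 * diff @ np.linalg.solve(cov, diff)
                - 0.5 * np.log(np.linalg.det(cov))
                - np.log(2 * np.pi))             # (n/2) log(2*pi) with n = 2

assert np.isclose(log_p_flow, log_p_direct)
```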
Training method
Flow-based models are generally trained by maximum likelihood. Pseudocode is as follows, with a minimal worked example after it:[5]
- INPUT. dataset $x_{1:n}$, normalizing flow model $f_\theta(\cdot), p_0$.
- SOLVE. $\max_\theta \sum_j \ln p_\theta(x_j)$ by gradient descent
- RETURN. $\hat{\theta}$
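The following sketch instantiates the pseudocode for a deliberately simple, hypothetical one-layer flow $x = f_\theta(z) = az + b$ with base distribution $z \sim N(0,1)$, so that $\log p_\theta(x) = \log N((x-b)/a;\,0,1) - \log|a|$ and the gradients can be written by hand; a practical model would use a deep invertible network and automatic differentiation.

```python
import numpy as np

# Maximum-likelihood training of a hypothetical scalar affine flow x = a*z + b.
rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=2.0, size=1000)     # dataset x_{1:n}

a, b, lr = 1.0, 0.0, 0.01
for step in range(2000):
    z = (data - b) / a                               # inverse map f_theta^{-1}(x)
    # gradients of the mean log-likelihood with respect to a and b
    grad_a = np.mean(z**2 / a - 1.0 / a)
    grad_b = np.mean(z / a)
    a += lr * grad_a                                 # gradient ascent on the log-likelihood
    b += lr * grad_b

print(a, b)  # should approach the data scale (about 2.0) and mean (about 3.0)
```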
Variants
Planar Flow
The earliest example.[1] Fix some activation function $h$, and let $\theta = (u, w, b)$ with the appropriate dimensions, then

$x = f_\theta(z) = z + u\, h(\langle w, z \rangle + b).$

The inverse $f_\theta^{-1}$ has no closed-form solution in general.
The Jacobian determinant is $\left| \det\left( I + h'(\langle w, z \rangle + b)\, u w^T \right) \right| = \left| 1 + h'(\langle w, z \rangle + b)\, \langle u, w \rangle \right|$.
For it to be invertible everywhere, the determinant must be nonzero everywhere. For example, $h = \tanh$ and $\langle u, w \rangle > -1$ satisfy the requirement.
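A minimal sketch of a single planar-flow layer, assuming the $\tanh$ nonlinearity and untrained, randomly chosen parameters $u$, $w$, $b$ (a trained model would additionally constrain $\langle u, w \rangle > -1$ to guarantee invertibility, as noted above):

```python
import numpy as np

def planar_forward(z, u, w, b):
    """Planar flow x = z + u * tanh(<w, z> + b) and its log|det J|."""
    pre = z @ w + b                        # <w, z> + b, shape (batch,)
    x = z + np.outer(np.tanh(pre), u)      # z + u * h(<w, z> + b)
    h_prime = 1.0 - np.tanh(pre) ** 2      # tanh'(<w, z> + b)
    log_det = np.log(np.abs(1.0 + h_prime * (u @ w)))   # log|1 + h' <u, w>|
    return x, log_det

rng = np.random.default_rng(0)
u, w, b = rng.standard_normal(3), rng.standard_normal(3), 0.1
z = rng.standard_normal((5, 3))            # batch of 5 latent samples
x, log_det = planar_forward(z, u, w, b)
```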
Nonlinear Independent Components Estimation (NICE)
Let $x, z \in \mathbb{R}^{2n}$ be even-dimensional, and split them in the middle.[2] Then the normalizing flow functions are

$x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = f_\theta(z) = \begin{bmatrix} z_1 \\ z_2 \end{bmatrix} + \begin{bmatrix} 0 \\ m_\theta(z_1) \end{bmatrix},$

where $m_\theta$ is any neural network with weights $\theta$.
$f_\theta^{-1}$ is just $z_1 = x_1,\; z_2 = x_2 - m_\theta(x_1)$, and the Jacobian determinant is just 1; that is, the flow is volume-preserving.
When $n = 1$ (so that $x, z \in \mathbb{R}^2$), this can be seen as a curvy shearing along the $x_2$ direction.
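A minimal sketch of one NICE additive coupling layer, with a tiny, hypothetical one-hidden-layer network standing in for "any neural network" $m_\theta$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                                 # z is 2d-dimensional, split in half
W1, W2 = rng.standard_normal((8, d)), rng.standard_normal((d, 8))

def m_theta(z1):
    return np.tanh(z1 @ W1.T) @ W2.T                  # hypothetical m_theta: R^d -> R^d

def nice_forward(z):
    z1, z2 = z[:, :d], z[:, d:]
    return np.concatenate([z1, z2 + m_theta(z1)], axis=1)

def nice_inverse(x):
    x1, x2 = x[:, :d], x[:, d:]
    return np.concatenate([x1, x2 - m_theta(x1)], axis=1)

z = rng.standard_normal((3, 2 * d))
assert np.allclose(nice_inverse(nice_forward(z)), z)  # exact inverse; |det J| = 1
```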
Real Non-Volume Preserving (Real NVP)
The Real Non-Volume Preserving model generalizes NICE by scaling as well as shifting the second half:[3]

$x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = f_\theta(z) = \begin{bmatrix} z_1 \\ e^{s_\theta(z_1)} \odot z_2 + m_\theta(z_1) \end{bmatrix}$

Its inverse is $z_1 = x_1,\; z_2 = e^{-s_\theta(x_1)} \odot (x_2 - m_\theta(x_1))$, and its Jacobian determinant is $\prod_i e^{s_\theta(z_1)_i}$.
Since the Real NVP map keeps the first and second halves of the vector separate, it is usually necessary to add a permutation (for example, swapping the two halves) after every Real NVP layer.
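A minimal sketch of one Real NVP affine coupling layer followed by a swap of the two halves (the simplest choice of permutation); $s_\theta$ and $m_\theta$ are tiny, hypothetical linear/tanh networks:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
Ws, Wm = 0.1 * rng.standard_normal((d, d)), 0.1 * rng.standard_normal((d, d))

def s_theta(z1): return np.tanh(z1 @ Ws.T)            # log-scale, R^d -> R^d
def m_theta(z1): return z1 @ Wm.T                      # shift,     R^d -> R^d

def realnvp_forward(z):
    z1, z2 = z[:, :d], z[:, d:]
    x2 = np.exp(s_theta(z1)) * z2 + m_theta(z1)
    log_det = np.sum(s_theta(z1), axis=1)              # log|det J| = sum_i s_theta(z1)_i
    return np.concatenate([x2, z1], axis=1), log_det   # swap halves for the next layer

z = rng.standard_normal((3, 2 * d))
x, log_det = realnvp_forward(z)
```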
Generative Flow (Glow)
Each layer of Glow has 3 parts:[4]
- channel-wise affine transform $y_{cij} = s_c(x_{cij} + b_c)$, with Jacobian determinant $\prod_c s_c^{HW}$.
- invertible 1x1 convolution $z_{cij} = \sum_{c'} K_{cc'} y_{c'ij}$, with Jacobian determinant $\det(K)^{HW}$. Here $K$ is an invertible matrix.
- Real NVP, with Jacobian as described in Real NVP.
The idea of using the invertible 1x1 convolution is to mix all the channels in a general, learned way, instead of merely permuting the first and second halves as in Real NVP.
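A minimal sketch of the invertible 1x1 convolution step on a hypothetical channels-first array: every spatial position's channel vector is multiplied by the same invertible matrix $K$, so the log-determinant is $HW \log|\det K|$.

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W = 3, 8, 8
K = rng.standard_normal((C, C)) + 2 * np.eye(C)       # keep K comfortably invertible

x = rng.standard_normal((C, H, W))
y = np.einsum('cd,dhw->chw', K, x)                     # 1x1 "convolution" over channels
log_det = H * W * np.log(np.abs(np.linalg.det(K)))

x_rec = np.einsum('cd,dhw->chw', np.linalg.inv(K), y)  # invert by applying K^{-1}
assert np.allclose(x_rec, x)
```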
Masked Autoregressive Flow (MAF)
An autoregressive model of a distribution on $\mathbb{R}^n$ is defined as the following stochastic process:[6]

$x_1 \sim N(\mu_1, \sigma_1^2),\quad x_2 \sim N(\mu_2(x_1), \sigma_2(x_1)^2),\quad \ldots,\quad x_n \sim N(\mu_n(x_{1:n-1}), \sigma_n(x_{1:n-1})^2)$
where $\mu_i : \mathbb{R}^{i-1} \to \mathbb{R}$ and $\sigma_i : \mathbb{R}^{i-1} \to (0, \infty)$ are fixed functions that define the autoregressive model.
By the reparametrization trick, the autoregressive model is generalized to a normalizing flow:

$x_1 = \mu_1 + \sigma_1 z_1,\quad x_2 = \mu_2(x_1) + \sigma_2(x_1) z_2,\quad \ldots,\quad x_n = \mu_n(x_{1:n-1}) + \sigma_n(x_{1:n-1}) z_n$

The autoregressive model is recovered by setting $z \sim N(0, I_n)$.
The forward mapping $z \mapsto x$ is slow (because it is sequential), but the backward mapping $x \mapsto z$ is fast (because it is parallel).
The Jacobian matrix is lower triangular, so its determinant is $\sigma_1 \sigma_2(x_1) \cdots \sigma_n(x_{1:n-1})$.
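A minimal sketch of this asymmetry, using simple hypothetical conditioners in place of learned $\mu_i$ and $\sigma_i$: the $z \to x$ direction must be computed coordinate by coordinate, whereas the $x \to z$ direction is an independent computation per coordinate.

```python
import numpy as np

n = 4

def mu(prefix):    return 0.5 * np.sum(prefix)           # stand-in for mu_i(x_{1:i-1})
def sigma(prefix): return np.exp(0.1 * np.sum(prefix))    # stand-in for sigma_i(x_{1:i-1}) > 0

def maf_forward(z):                 # z -> x: x_i needs x_{1:i-1}, so it is sequential
    x = np.zeros(n)
    for i in range(n):
        x[i] = mu(x[:i]) + sigma(x[:i]) * z[i]
    return x

def maf_inverse(x):                 # x -> z: every z_i depends only on the known x
    return np.array([(x[i] - mu(x[:i])) / sigma(x[:i]) for i in range(n)])

rng = np.random.default_rng(0)
z = rng.standard_normal(n)
assert np.allclose(maf_inverse(maf_forward(z)), z)
```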
Reversing the roles of the two maps $f_\theta$ and $f_\theta^{-1}$ of MAF results in the Inverse Autoregressive Flow (IAF), which has a fast forward mapping and a slow backward mapping.[7]
Continuous Normalizing Flow (CNF)
Instead of constructing a flow by function composition, another approach is to formulate the flow as a continuous-time dynamic.[8] Let $z_0$ be the latent variable with distribution $p(z_0)$. Map this latent variable to data space with the following flow function:

$x = F(z_0) = z_T = z_0 + \int_0^T f(z_t, t)\, dt$
where $f$ is an arbitrary function that can be modeled with, e.g., neural networks.
The inverse function is then naturally:[8]

$z_0 = F^{-1}(x) = z_T + \int_T^0 f(z_t, t)\, dt = x - \int_0^T f(z_t, t)\, dt$
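A minimal sketch, assuming a simple, hypothetical linear vector field $f(z, t) = Az$ and plain Euler steps (a practical continuous normalizing flow would learn $f$ and use an ODE solver, as discussed below): the inverse is obtained simply by integrating the same field backwards in time.

```python
import numpy as np

A = np.array([[0.0, -1.0], [1.0, 0.0]])      # hypothetical vector field f(z, t) = A z

def f(z, t):
    return A @ z

def integrate(z, t0, t1, steps=1000):
    """Euler integration of dz/dt = f(z, t) from t0 to t1."""
    dt = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        z = z + dt * f(z, t)
        t += dt
    return z

rng = np.random.default_rng(0)
z0 = rng.standard_normal(2)
x = integrate(z0, 0.0, 1.0)                   # forward flow z_0 -> x
z0_rec = integrate(x, 1.0, 0.0)               # backward flow x -> z_0
print(np.max(np.abs(z0_rec - z0)))            # small Euler discretization error
```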
And the log-likelihood of $x$ can be found as:[8]

$\log p(x) = \log p(z_0) - \int_0^T \operatorname{Tr}\left[ \frac{\partial f}{\partial z_t} \right] dt$
The trace can be estimated by "Hutchinson's trick":[9]
Given any matrix $W \in \mathbb{R}^{n \times n}$, and any random vector $u \in \mathbb{R}^n$ with $E[u u^T] = I$, we have $E[u^T W u] = \operatorname{tr}(W)$. (Proof: expand the expectation directly.)
Usually, the random vector $u$ is sampled from $N(0, I)$ (normal distribution) or $\{\pm 1\}^n$ (Rademacher distribution).
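A minimal numeric check of the identity, assuming Rademacher-distributed $u$ and an arbitrary fixed matrix $W$ (the matrix plays the role of the Jacobian $\partial f / \partial z_t$ above):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
W = rng.standard_normal((n, n))                    # stands in for the Jacobian

U = rng.choice([-1.0, 1.0], size=(100_000, n))     # Rademacher entries, so E[u u^T] = I
estimates = np.einsum('bi,ij,bj->b', U, W, U)      # u^T W u for each sample
print(estimates.mean(), np.trace(W))               # the two values should be close
```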
Because of the use of integration, techniques such as Neural ODE[10] may be needed in practice.
Applications
Flow-based generative models have been applied to a variety of modeling tasks, including:
- audio generation[11]
- molecular graph generation[12]
- point-cloud modeling[13]
- video generation[14]
References
- ^ a b Danilo Jimenez Rezende; Mohamed, Shakir (2015). "Variational Inference with Normalizing Flows". arXiv:1505.05770 [stat.ML].
- ^ a b Dinh, Laurent; Krueger, David; Bengio, Yoshua (2014). "NICE: Non-linear Independent Components Estimation". arXiv:1410.8516 [cs.LG].
- ^ a b Dinh, Laurent; Sohl-Dickstein, Jascha; Bengio, Samy (2016). "Density estimation using Real NVP". arXiv:1605.08803 [cs.LG].
- ^ a b c Kingma, Diederik P.; Dhariwal, Prafulla (2018). "Glow: Generative Flow with Invertible 1x1 Convolutions". arXiv:1807.03039 [stat.ML].
- ^ Kobyzev, Ivan; Prince, Simon J.D.; Brubaker, Marcus A. (November 2021). "Normalizing Flows: An Introduction and Review of Current Methods". IEEE Transactions on Pattern Analysis and Machine Intelligence. 43 (11): 3964–3979. doi:10.1109/TPAMI.2020.2992934. ISSN 1939-3539.
- ^ Papamakarios, George; Pavlakou, Theo; Murray, Iain (2017). "Masked Autoregressive Flow for Density Estimation". Advances in Neural Information Processing Systems. 30. Curran Associates, Inc.
- ^ Kingma, Durk P; Salimans, Tim; Jozefowicz, Rafal; Chen, Xi; Sutskever, Ilya; Welling, Max (2016). "Improved Variational Inference with Inverse Autoregressive Flow". Advances in Neural Information Processing Systems. 29. Curran Associates, Inc.
- ^ a b c Grathwohl, Will; Chen, Ricky T. Q.; Bettencourt, Jesse; Sutskever, Ilya; Duvenaud, David (2018). "FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models". arXiv:1810.01367 [cs.LG].
- ^ Hutchinson, M.F. (January 1989). "A Stochastic Estimator of the Trace of the Influence Matrix for Laplacian Smoothing Splines". Communications in Statistics - Simulation and Computation. 18 (3): 1059–1076. doi:10.1080/03610918908812806. ISSN 0361-0918.
- ^ Chen, Ricky T. Q.; Rubanova, Yulia; Bettencourt, Jesse; Duvenaud, David (2018). "Neural Ordinary Differential Equations". arXiv:1806.07366 [cs.LG].
- ^ Ping, Wei; Peng, Kainan; Zhao, Kexin; Song, Zhao (2019). "WaveFlow: A Compact Flow-based Model for Raw Audio". arXiv:1912.01219 [cs.SD].
- ^ Shi, Chence; Xu, Minkai; Zhu, Zhaocheng; Zhang, Weinan; Zhang, Ming; Tang, Jian (2020). "GraphAF: A Flow-based Autoregressive Model for Molecular Graph Generation". arXiv:2001.09382 [cs.LG].
- ^ Yang, Guandao; Huang, Xun; Hao, Zekun; Liu, Ming-Yu; Belongie, Serge; Hariharan, Bharath (2019). "PointFlow: 3D Point Cloud Generation with Continuous Normalizing Flows". arXiv:1906.12320 [cs.CV].
- ^ Kumar, Manoj; Babaeizadeh, Mohammad; Erhan, Dumitru; Finn, Chelsea; Levine, Sergey; Dinh, Laurent; Kingma, Durk (2019). "VideoFlow: A Conditional Flow-Based Model for Stochastic Video Generation". arXiv:1903.01434 [cs.CV].