{{Short description|Statistical model used in machine learning}}
{{Machine learning bar}}
 
A '''flow-based generative model''' is a [[generative model]] used in [[machine learning]] that explicitly models a [[probability distribution]] by leveraging '''normalizing flow''',<ref>{{cite journal |last1=Tabak |first1=Esteban G. |last2=Vanden-Eijnden |first2=Eric |title=Density estimation by dual ascent of the log-likelihood |journal=Communications in Mathematical Sciences |date=2010 |volume=8 |issue=1 |pages=217–233 |doi=10.4310/CMS.2010.v8.n1.a11 |url=https://projecteuclid.org/journals/communications-in-mathematical-sciences/volume-8/issue-1/Density-estimation-by-dual-ascent-of-the-log-likelihood/cms/1266935020.full}}</ref><ref>{{cite journal |last1=Tabak |first1=Esteban G. |last2=Turner |first2=Cristina V. |title=A family of nonparametric density estimation algorithms |journal=Communications on Pure and Applied Mathematics |date=2012 |volume=66 |issue=2 |pages=145–164 |doi=10.1002/cpa.21423 |hdl=11336/8930 |s2cid=17820269 |url=https://onlinelibrary.wiley.com/doi/abs/10.1002/cpa.21423|hdl-access=free }}</ref><ref>{{cite journal |first1=George |last1=Papamakarios |first2=Eric |last2=Nalisnick |first3=Danilo |last3=Jimenez Rezende |first4=Shakir |last4=Mohamed |first5=Balaji |last5=Lakshminarayanan |year=2021 |title=Normalizing flows for probabilistic modeling and inference |journal=Journal of Machine Learning Research |volume=22 |issue=1 |pages=2617–2680 |arxiv=1912.02762 |url=https://dl.acm.org/doi/abs/10.5555/3546258.3546315 }}</ref> which is a statistical method using the [[Probability density function#Function of random variables and change of variables in the probability density function|change-of-variable]] law of probabilities to transform a simple distribution into a complex one.
 
The direct modeling of likelihood provides many advantages. For example, the negative log-likelihood can be directly computed and minimized as the [[loss function]]. Additionally, novel samples can be generated by sampling from the initial distribution, and applying the flow transformation.
 
In contrast, many alternative generative modeling methods such as [[Variational autoencoder|variational autoencoder (VAE)]] and [[generative adversarial network]] do not explicitly represent the likelihood function.
 
== Method ==
 
[[File:Normalizing-flow.svg|thumb|Scheme for normalizing flows]]
 
Let <math>z_0</math> be a (possibly multivariate) [[random variable]] with distribution <math>p_0(z_0)</math>.
 
For <math>i = 1, ..., K</math>, let <math>z_i = f_i(z_{i-1})</math> be a sequence of random variables transformed from <math>z_0</math>. The functions <math>f_1, ..., f_K</math> should be invertible, i.e. the [[inverse function]] <math>f^{-1}_i</math> exists. The final output <math>z_K</math> models the target distribution.
 
 
The log likelihood of <math>z_K</math> is (see [[#Derivation of log likelihood|derivation]]):
 
: <math>\log p_K(z_K) = \log p_0(z_0) - \sum_{i=1}^{K} \log \left|\det \frac{df_i(z_{i-1})}{dz_{i-1}}\right|</math>
 
 
Learning probability distributions by differentiating log-Jacobian determinants originated in the Infomax (maximum-likelihood) approach to [[Independent component analysis|ICA]],<ref>Bell, A. J.; Sejnowski, T. J. (1995). "[https://doi.org/10.1162/neco.1995.7.6.1129 An information-maximization approach to blind separation and blind deconvolution]". ''Neural Computation''. '''7''' (6): 1129–1159. doi:10.1162/neco.1995.7.6.1129.</ref> which forms a single-layer (<math>K=1</math>) flow-based model. Relatedly, a single-layer precursor of conditional generative flows appeared in Roth & Baram (1996).<ref>Roth, Z.; Baram, Y. (1996). "[https://doi.org/10.1109/72.536322 Multidimensional density shaping by sigmoids]". ''IEEE Transactions on Neural Networks''. '''7''' (5): 1291–1298. doi:10.1109/72.536322.</ref>
 
To efficiently compute the log likelihood, the functions <math>f_1, ..., f_K</math> should be easily invertible, and the determinants of their Jacobians should be simple to compute. In practice, the functions <math>f_1, ..., f_K</math> are modeled using [[Deep learning|deep neural networks]], and are trained to minimize the negative log-likelihood of data samples from the target distribution. These architectures are usually designed such that only the forward pass of the neural network is required in both the inverse and the Jacobian determinant calculations. Examples of such architectures include NICE,<ref name=":1">{{cite arXiv | eprint=1410.8516| last1=Dinh| first1=Laurent| last2=Krueger| first2=David| last3=Bengio| first3=Yoshua| title=NICE: Non-linear Independent Components Estimation| year=2014| class=cs.LG}}</ref> RealNVP,<ref name=":2">{{cite arXiv | eprint=1605.08803| last1=Dinh| first1=Laurent| last2=Sohl-Dickstein| first2=Jascha| last3=Bengio| first3=Samy| title=Density estimation using Real NVP| year=2016| class=cs.LG}}</ref> and Glow.<ref name="glow">{{cite arXiv | eprint=1807.03039| last1=Kingma| first1=Diederik P.| last2=Dhariwal| first2=Prafulla| title=Glow: Generative Flow with Invertible 1x1 Convolutions| year=2018| class=stat.ML}}</ref>
 
=== Derivation of log likelihood ===
 
Consider <math>z_1</math> and <math>z_0</math>. Note that <math>z_0 = f^{-1}_1(z_1)</math>.
 
By the [[Probability density function#Function of random variables and change of variables in the probability density function|change of variable]] formula, the distribution of <math>z_1</math> is:
 
: <math>p_1(z_1) = p_0(z_0)\left|\det \frac{df_1^{-1}(z_1)}{dz_1}\right|</math>
 
where <math>\det \frac{df_1^{-1}(z_1)}{dz_1}</math> is the [[determinant]] of the [[Jacobian matrix and determinant|Jacobian matrix]] of <math>f^{-1}_1</math>.
 
By the [[inverse function theorem]]:
 
: <math>p_1(z_1) = p_0(z_0)\left|\det \left(\frac{df_1(z_0)}{dz_0}\right)^{-1}\right|</math>
 
By the identity <math>\det(A^{-1}) = \det(A)^{-1}</math> (where <math>A</math> is an [[invertible matrix]]), we have:
 
: <math>p_1(z_1) = p_0(z_0)\left|\det \frac{df_1(z_0)}{dz_0}\right|^{-1}</math>
 
The log likelihood is thus:
 
: <math>\log p_1(z_1) = \log p_0(z_0) - \log \left|\det \frac{df_1(z_0)}{dz_0}\right|</math>
 
In general, the above applies to any <math>z_i</math> and <math>z_{i-1}</math>. Since <math>\log p_i(z_i)</math> equals <math>\log p_{i-1}(z_{i-1})</math> minus a term that involves only <math>f_i</math> and <math>z_{i-1}</math>, it follows by [[Mathematical induction|induction]] that:
 
: <math>\log p_K(z_K) = \log p_0(z_0) - \sum_{i=1}^{K} \log \left|\det \frac{df_i(z_{i-1})}{dz_{i-1}}\right|</math>
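
For example, with a single transformation (<math>K=1</math>), a standard normal base distribution <math>z_0 \sim N(0,1)</math> and <math>f_1(z) = e^z</math>, the Jacobian term is <math>df_1/dz_0 = e^{z_0} = z_1</math>, so the formula gives
: <math>\log p_1(z_1) = \log p_0(\log z_1) - \log z_1,</math>
which is the log-density of the standard [[log-normal distribution]].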
 
== Training method ==
As is generally done when training a deep learning model, the goal with normalizing flows is to minimize the [[Kullback–Leibler divergence]] between the model's likelihood and the target distribution to be estimated. Denoting by <math>p_\theta</math> the model's likelihood and by <math>p^*</math> the target distribution to learn, the (forward) KL-divergence is:
 
: <math>D_{\text{KL}}[p^{*}(x) \| p_{\theta}(x)] = -\mathop{\mathbb{E}}_{p^{*}(x)}[\log p_{\theta}(x)] + \mathop{\mathbb{E}}_{p^{*}(x)}[\log p^{*}(x) ]</math>
 
The second term on the right-hand side of the equation corresponds to the entropy of the target distribution and is independent of the parameter <math>\theta</math> we want the model to learn, which only leaves the expectation of the negative log-likelihood to minimize under the target distribution. This intractable term can be approximated with a [[Monte Carlo method|Monte Carlo]] estimate. Indeed, if we have a dataset <math>\{x_{i}\}_{i=1}^N</math> of samples each independently drawn from the target distribution <math>p^*(x)</math>, then this term can be estimated as:
 
: <math>-\hat{\mathop{\mathbb{E}}}_{p^{*}(x)}[\log p_{\theta}(x)] = -\frac{1}{N} \sum_{i=1}^{N} \log p_{\theta}(x_{i}) </math>
 
Therefore, the learning objective
 
: <math>\underset{\theta}{\operatorname{arg\,min}}\ D_{\text{KL}}[p^{*}(x) \| p_{\theta}(x)]</math>
is replaced by
: <math>\underset{\theta}{\operatorname{arg\,max}}\ \sum_{i=1}^{N} \log p_{\theta}(x_{i})</math>
 
In other words, minimizing the [[Kullback–Leibler divergence]] between the model's likelihood and the target distribution is equivalent to [[Maximum likelihood estimation|maximizing the model likelihood]] under observed samples of the target distribution.<ref>{{Cite journal |last1=Papamakarios |first1=George |last2=Nalisnick |first2=Eric |last3=Rezende |first3=Danilo Jimenez |last4=Mohamed |first4=Shakir |last5=Lakshminarayanan |first5=Balaji |date=March 2021 |title=Normalizing Flows for Probabilistic Modeling and Inference |journal=Journal of Machine Learning Research |url=https://jmlr.org/papers/v22/19-1028.html |volume=22 |issue=57 |pages=1–64 |arxiv=1912.02762}}</ref>
 
Pseudocode for training a normalizing flow is as follows (a PyTorch sketch is given after the pseudocode):<ref>{{Cite journal |last1=Kobyzev |first1=Ivan |last2=Prince |first2=Simon J.D. |last3=Brubaker |first3=Marcus A. |date=November 2021 |title=Normalizing Flows: An Introduction and Review of Current Methods |journal=IEEE Transactions on Pattern Analysis and Machine Intelligence |volume=43 |issue=11 |pages=3964–3979 |doi=10.1109/TPAMI.2020.2992934 |pmid=32396070 |arxiv=1908.09257 |bibcode=2021ITPAM..43.3964K |s2cid=208910764 |issn=1939-3539}}</ref>
 
* INPUT. dataset <math>x_{1:n}</math>, normalizing flow model <math>f_\theta(\cdot), p_0 </math>.
* SOLVE. <math>\max_\theta \sum_j \ln p_\theta(x_j)</math> by gradient descent
* RETURN. <math>\hat\theta</math>
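
A minimal [[PyTorch]] sketch of this procedure is shown below. The <code>flow</code> interface is an illustrative assumption: it is taken to return both <math>z_0 = f^{-1}(x)</math> and the accumulated log-Jacobian term of the inverse map, so that the change-of-variables formula gives the log-likelihood directly.
<pre>
import torch

def train(flow, base_dist, data_loader, epochs=10, lr=1e-3):
    # Hypothetical interface: flow(x) returns (z, log_det), where z = f^{-1}(x)
    # and log_det is the summed log|det Jacobian| of the inverse transformations.
    # base_dist.log_prob is assumed to give one log-density per sample
    # (e.g. a torch.distributions.MultivariateNormal base).
    optimizer = torch.optim.Adam(flow.parameters(), lr=lr)
    for _ in range(epochs):
        for x in data_loader:
            z, log_det = flow(x)
            log_px = base_dist.log_prob(z) + log_det   # change of variables
            loss = -log_px.mean()                      # negative log-likelihood
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return flow
</pre>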
 
== Variants ==
 
=== Planar Flow ===
The planar flow is among the earliest examples.<ref name=":0">{{cite arXiv | eprint=1505.05770| author1=Danilo Jimenez Rezende| last2=Mohamed| first2=Shakir| title=Variational Inference with Normalizing Flows| year=2015| class=stat.ML}}</ref> Fix some activation function <math>h</math>, and let <math>\theta = (u, w, b)</math> with the appropriate dimensions; then<math display="block">x = f_\theta(z) = z + u h(\langle w, z \rangle + b)</math>The inverse <math>f_\theta^{-1}</math> has no closed-form solution in general.
 
The absolute value of the Jacobian determinant is <math>|\det (I + h'(\langle w, z \rangle + b) uw^T)| = |1 + h'(\langle w, z \rangle + b) \langle u, w\rangle|</math>.
 
For <math>f_\theta</math> to be invertible everywhere, this determinant must be nonzero everywhere. For example, <math>h = \tanh</math> and <math>\langle u, w \rangle > -1</math> satisfy the requirement.
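
A minimal PyTorch sketch of a single planar-flow layer and its log-Jacobian term is given below (illustrative only; in particular, the invertibility condition <math>\langle u, w \rangle > -1</math> is not enforced):
<pre>
import torch
import torch.nn as nn

class PlanarFlow(nn.Module):
    # One planar-flow layer: x = z + u * tanh(w·z + b).  Illustrative sketch only.
    def __init__(self, dim):
        super().__init__()
        self.u = nn.Parameter(0.01 * torch.randn(dim))
        self.w = nn.Parameter(0.01 * torch.randn(dim))
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, z):                          # z has shape (batch, dim)
        lin = z @ self.w + self.b                  # w·z + b
        x = z + self.u * torch.tanh(lin).unsqueeze(-1)
        # log|det J| = log|1 + h'(w·z + b) u·w|, with h = tanh, h' = 1 - tanh^2
        h_prime = 1 - torch.tanh(lin) ** 2
        log_det = torch.log(torch.abs(1 + h_prime * (self.u @ self.w)))
        return x, log_det
</pre>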
 
=== Nonlinear Independent Components Estimation (NICE) ===
Let <math>x, z\in \R^{2n}</math> be even-dimensional, and split them in the middle.<ref name=":1" /> Then the normalizing flow functions are<math display="block">x = \begin{bmatrix}
x_1 \\ x_2
\end{bmatrix}= f_\theta(z) = \begin{bmatrix}
z_1 \\z_2
\end{bmatrix} + \begin{bmatrix}
0 \\ m_\theta(z_1)
\end{bmatrix}</math>where <math>m_\theta</math> is any neural network with weights <math>\theta</math>.
 
<math>f_\theta^{-1}</math> is just <math>z_1 = x_1, z_2 = x_2 - m_\theta(x_1)</math>, and the Jacobian determinant is 1; that is, the flow is volume-preserving.
 
When <math>n=1</math>, this can be viewed as a nonlinear shear along the <math>x_2</math> direction.
 
=== Real Non-Volume Preserving (Real NVP) ===
The Real Non-Volume Preserving model generalizes the NICE model by adding an elementwise scaling:<ref name=":2" /><math display="block">x = \begin{bmatrix}
x_1 \\ x_2
\end{bmatrix}= f_\theta(z) = \begin{bmatrix}
z_1 \\ e^{s_\theta(z_1)} \odot z_2
\end{bmatrix} + \begin{bmatrix}
0 \\ m_\theta(z_1)
\end{bmatrix}</math>
 
Its inverse is <math>z_1 = x_1, z_2 = e^{-s_\theta (x_1)}\odot (x_2 - m_\theta (x_1))</math>, and its Jacobian determinant is <math>\prod^n_{i=1} e^{s_\theta(z_{1})_i}</math>. The NICE model is recovered by setting <math>s_\theta = 0</math>.
Since the Real NVP map keeps the first and second halves of the vector <math>x</math> separate, a permutation <math>(x_1, x_2) \mapsto (x_2, x_1)</math> is usually added after every Real NVP layer.
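
A minimal PyTorch sketch of one affine coupling layer is given below; the small conditioner networks for <math>s_\theta</math> and <math>m_\theta</math> are illustrative assumptions. Setting the scale network's output to zero recovers the volume-preserving NICE coupling layer.
<pre>
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    # One Real NVP coupling layer; the conditioner networks are illustrative.
    def __init__(self, dim, hidden=64):
        super().__init__()
        half = dim // 2
        self.scale_net = nn.Sequential(nn.Linear(half, hidden), nn.ReLU(), nn.Linear(hidden, half))
        self.shift_net = nn.Sequential(nn.Linear(half, hidden), nn.ReLU(), nn.Linear(hidden, half))

    def forward(self, z):                           # z has shape (batch, 2 * half)
        z1, z2 = z.chunk(2, dim=-1)
        s, m = self.scale_net(z1), self.shift_net(z1)
        x = torch.cat([z1, torch.exp(s) * z2 + m], dim=-1)
        log_det = s.sum(dim=-1)                     # log of prod_i exp(s_i) = sum_i s_i
        return x, log_det

    def inverse(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        z2 = torch.exp(-self.scale_net(x1)) * (x2 - self.shift_net(x1))
        return torch.cat([x1, z2], dim=-1)
</pre>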
 
=== Generative Flow (Glow) ===
In the generative flow model,<ref name="glow" /> each layer consists of three parts:
 
* channel-wise affine transform<math display="block">y_{cij} = s_c(x_{cij} + b_c)</math>with Jacobian determinant <math>\prod_c s_c^{HW}</math>.
* invertible 1x1 convolution<math display="block">z_{cij} = \sum_{c'} K_{cc'} y_{c'ij}</math>with Jacobian determinant <math>\det(K)^{HW}</math>. Here <math>K</math> is any invertible matrix.
* Real NVP coupling layer, with Jacobian determinant as described in Real NVP.
 
The idea of using the invertible 1x1 convolution is to mix all channels in a general, learned way, instead of merely swapping the first and second halves, as in Real NVP.
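
The channel-mixing step can be sketched as follows for a single feature map, with a hypothetical invertible matrix <math>K</math> (actual Glow implementations parametrize <math>K</math> more carefully, e.g. via an LU decomposition):
<pre>
import torch

def invertible_1x1_conv(y, K):
    # y: feature map of shape (C, H, W); K: invertible (C, C) matrix.
    C, H, W = y.shape
    z = torch.einsum('dc,chw->dhw', K, y)                  # mix all channels at every pixel
    log_det = H * W * torch.linalg.slogdet(K).logabsdet    # log of |det K|^(H*W)
    return z, log_det
</pre>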
 
=== Masked Autoregressive Flow (MAF) ===
An autoregressive model of a distribution on <math>\R^n</math> is defined as the following stochastic process:<ref>{{Cite journal |last1=Papamakarios |first1=George |last2=Pavlakou |first2=Theo |last3=Murray |first3=Iain |date=2017 |title=Masked Autoregressive Flow for Density Estimation |url=https://proceedings.neurips.cc/paper/2017/hash/6c1da886822c67822bcf3679d04369fa-Abstract.html |journal=Advances in Neural Information Processing Systems |publisher=Curran Associates, Inc. |volume=30|arxiv=1705.07057 }}</ref>
 
<math display="block">\begin{align}
x_1 \sim& N(\mu_1, \sigma_1^2)\\
x_2 \sim& N(\mu_2(x_1), \sigma_2(x_1)^2)\\
&\cdots \\
x_n \sim& N(\mu_n(x_{1:n-1}), \sigma_n(x_{1:n-1})^2)\\
\end{align}</math>where <math>\mu_i: \R^{i-1} \to \R</math> and <math>\sigma_i: \R^{i-1} \to (0, \infty)</math> are fixed functions that define the autoregressive model.
 
By the [[reparameterization trick]], the autoregressive model is generalized to a normalizing flow:<math display="block">\begin{align}
x_1 =& \mu_1 + \sigma_1 z_1\\
x_2 =& \mu_2(x_1) + \sigma_2(x_1) z_2\\
&\cdots \\
x_n =& \mu_n(x_{1:n-1}) + \sigma_n(x_{1:n-1}) z_n\\
\end{align}</math>The autoregressive model is recovered by setting <math>z \sim N(0, I_{n})</math>.
 
The forward mapping <math>z \mapsto x</math> (used for sampling) is slow, because it is sequential, while the inverse mapping <math>x \mapsto z</math> (used for density evaluation) is fast, because it can be computed in parallel.
 
The Jacobian matrix is lower-triangular, so the Jacobian determinant is <math>\sigma_1 \sigma_2(x_1)\cdots \sigma_n(x_{1:n-1})</math>.
 
Reversing the roles of the two maps <math>f_\theta</math> and <math>f_\theta^{-1}</math> of MAF results in the Inverse Autoregressive Flow (IAF), which has a fast forward mapping and a slow inverse mapping.<ref>{{Cite journal |last1=Kingma |first1=Durk P |last2=Salimans |first2=Tim |last3=Jozefowicz |first3=Rafal |last4=Chen |first4=Xi |last5=Sutskever |first5=Ilya |last6=Welling |first6=Max |date=2016 |title=Improved Variational Inference with Inverse Autoregressive Flow |url=https://proceedings.neurips.cc/paper/2016/hash/ddeebdeefdb7e7e7a697e1c3e3d8ef54-Abstract.html |journal=Advances in Neural Information Processing Systems |publisher=Curran Associates, Inc. |volume=29|arxiv=1606.04934 }}</ref>
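
The sequential and parallel character of the two directions can be seen in the following sketch, where <code>mu_fns</code> and <code>sigma_fns</code> are hypothetical lists of conditioner functions, each taking the first <math>i-1</math> components and returning a scalar (in MAF they are implemented with masked networks):
<pre>
import torch

def maf_forward(z, mu_fns, sigma_fns):
    # z -> x is sequential: each x_i needs the previously generated x_1, ..., x_{i-1}.
    x = torch.zeros_like(z)
    for i in range(z.numel()):
        x[i] = mu_fns[i](x[:i]) + sigma_fns[i](x[:i]) * z[i]
    return x

def maf_inverse(x, mu_fns, sigma_fns):
    # x -> z is parallel: every z_i depends only on the observed x, not on other z's.
    return torch.stack([(x[i] - mu_fns[i](x[:i])) / sigma_fns[i](x[:i])
                        for i in range(x.numel())])
</pre>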
 
=== Continuous Normalizing Flow (CNF) ===
 
Instead of constructing flow by function composition, another approach is to formulate the flow as a continuous-time dynamic.<ref name="ffjord">{{cite arXiv | eprint=1810.01367| last1=Grathwohl| first1=Will| last2=Chen| first2=Ricky T. Q.| last3=Bettencourt| first3=Jesse| last4=Sutskever| first4=Ilya| last5=Duvenaud| first5=David| title=FFJORD: Free-form Continuous Dynamics for Scalable Reversible Generative Models| year=2018| class=cs.LG}}</ref><ref>{{Cite arXiv |last1=Lipman |first1=Yaron |last2=Chen |first2=Ricky T. Q. |last3=Ben-Hamu |first3=Heli |last4=Nickel |first4=Maximilian |last5=Le |first5=Matt |date=2022-10-01 |title=Flow Matching for Generative Modeling |class=cs.LG | eprint=2210.02747}}</ref> Let <math>z_0</math> be the latent variable with distribution <math>p(z_0)</math>. Map this latent variable to data space with the following flow function:
 
: <math>x = F(z_0) = z_T = z_0 + \int_0^T f(z_t, t) dt</math>
 
where <math>f</math> is an arbitrary function and can be modeled with e.g. neural networks.
 
The inverse function is then naturally:<ref name="ffjord" />
 
: <math>z_0 = F^{-1}(x) = z_T + \int_T^0 f(z_t, t) dt = z_T - \int_0^T f(z_t,t) dt </math>
 
And the log-likelihood of <math>x</math> can be found as:<ref name="ffjord" />
 
: <math>\log(p(x)) = \log(p(z_0)) - \int_0^T \text{Tr}\left[\frac{\partial f}{\partial z_t} \right] dt</math>
 
Since the trace depends only on the diagonal of the Jacobian <math>\partial_{z_t} f</math>, this allows a "free-form" Jacobian.<ref name="ffjord" /> Here, "free-form" means that there is no restriction on the Jacobian's form. It is contrasted with previous discrete models of normalizing flow, where the Jacobian is carefully designed to be upper- or lower-triangular, so that it can be evaluated efficiently.
 
The trace can be estimated by "Hutchinson's trick":<ref name="Finlay 3154–3164">{{Cite journal |last1=Finlay |first1=Chris |last2=Jacobsen |first2=Joern-Henrik |last3=Nurbekyan |first3=Levon |last4=Oberman |first4=Adam |date=2020-11-21 |title=How to Train Your Neural ODE: the World of Jacobian and Kinetic Regularization |url=https://proceedings.mlr.press/v119/finlay20a.html |journal=International Conference on Machine Learning |language=en |publisher=PMLR |pages=3154–3164|arxiv=2002.02798 }}</ref><ref>{{Cite journal |last=Hutchinson |first=M.F. |date=January 1989 |title=A Stochastic Estimator of the Trace of the Influence Matrix for Laplacian Smoothing Splines |url=http://www.tandfonline.com/doi/abs/10.1080/03610918908812806 |journal=Communications in Statistics - Simulation and Computation |language=en |volume=18 |issue=3 |pages=1059–1076 |doi=10.1080/03610918908812806 |issn=0361-0918|url-access=subscription }}</ref><blockquote>Given any matrix <math>W\in \R^{n\times n}</math>, and any random <math>u\in \R^n</math> with <math>E[uu^T] = I</math>, we have <math>E[u^T W u] = \operatorname{tr}(W)</math>. (Proof: expand the expectation directly.)</blockquote>Usually, the random vector is sampled from <math>N(0, I)</math> (normal distribution) or <math>\{\pm 1\}^n</math> ([[Rademacher distribution]]).
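
A minimal sketch of the Hutchinson estimator in PyTorch, using Rademacher probe vectors and reverse-mode vector-Jacobian products (the single-vector interface is an illustrative assumption):
<pre>
import torch

def hutchinson_trace(f, z, n_samples=10):
    # Stochastic estimate of tr(df/dz) at z, where f maps R^n to R^n.
    z = z.detach().requires_grad_(True)
    out = f(z)
    estimate = 0.0
    for _ in range(n_samples):
        u = torch.randint(0, 2, z.shape).to(z.dtype) * 2 - 1   # Rademacher: entries +1 or -1
        (vjp,) = torch.autograd.grad(out, z, grad_outputs=u, retain_graph=True)
        estimate = estimate + (u * vjp).sum()    # u^T J u, since the scalar u^T J^T u equals u^T J u
    return estimate / n_samples
</pre>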
 
When <math>f</math> is implemented as a neural network, [[neural ODE]] methods<ref>{{cite conference |last1=Chen |first1=Ricky T. Q. |last2=Rubanova |first2=Yulia |last3=Bettencourt |first3=Jesse |last4=Duvenaud |first4=David K. |year=2018 |editor1-last=Bengio |editor1-first=S. |editor2-last=Wallach |editor2-first=H. |editor3-last=Larochelle |editor3-first=H. |editor4-last=Grauman |editor4-first=K. |editor5-last=Cesa-Bianchi |editor5-first=N. |editor6-last=Garnett |editor6-first=R. |title=Neural Ordinary Differential Equations |url=https://proceedings.neurips.cc/paper_files/paper/2018/file/69386f6bb1dfed68692a24c8686939b9-Paper.pdf |conference= |publisher=Curran Associates, Inc. |volume=31 |arxiv=1806.07366 |book-title=Advances in Neural Information Processing Systems}}</ref> would be needed. Indeed, CNF was first proposed in the same paper that proposed neural ODE.
 
There are two main deficiencies of CNF. One is that a continuous flow must be a [[homeomorphism]], and thus preserve orientation and [[ambient isotopy]] (for example, it is impossible to flip a left hand into a right hand by continuously deforming space, and it is impossible to [[Sphere eversion|turn a sphere inside out]] or undo a knot); the other is that the learned flow <math>f</math> might be ill-behaved, due to degeneracy (that is, there are an infinite number of possible <math>f</math> that all solve the same problem).
 
By adding extra dimensions, the CNF gains enough freedom to reverse orientation and go beyond ambient isotopy (just like how one can pick up a polygon from a desk and flip it around in 3-space, or unknot a knot in 4-space), yielding the "augmented neural ODE".<ref>{{Cite journal |last1=Dupont |first1=Emilien |last2=Doucet |first2=Arnaud |last3=Teh |first3=Yee Whye |date=2019 |title=Augmented Neural ODEs |url=https://proceedings.neurips.cc/paper/2019/hash/21be9a4bd4f81549a9d1d241981cec3c-Abstract.html |journal=Advances in Neural Information Processing Systems |publisher=Curran Associates, Inc. |volume=32}}</ref>
 
Any homeomorphism of <math>\R^n</math> can be approximated by a neural ODE operating on <math>\R^{2n+1}</math>, proved by combining [[Whitney embedding theorem]] for manifolds and the [[universal approximation theorem]] for neural networks.<ref>{{cite arXiv |last1=Zhang |first1=Han |last2=Gao |first2=Xi |last3=Unterman |first3=Jacob |last4=Arodz |first4=Tom |date=2019-07-30 |title=Approximation Capabilities of Neural ODEs and Invertible Residual Networks |class=cs.LG |language=en |eprint=1907.12998}}</ref>
 
To regularize the flow <math>f</math>, one can impose regularization losses. Finlay et al. (2020)<ref name="Finlay 3154–3164"/> proposed the following regularization loss based on [[Optimal transport|optimal transport theory]]:<math display="block">\lambda_{K} \int_{0}^{T}\left\|f(z_t, t)\right\|^{2} dt
+\lambda_{J} \int_{0}^{T}\left\|\nabla_z f(z_t, t)\right\|_F^{2} dt
</math>where <math>\lambda_K, \lambda_J >0
</math> are hyperparameters. The first term punishes the model for oscillating the flow field over time, and the second term punishes it for oscillating the flow field over space. Both terms together guide the model into a flow that is smooth (not "bumpy") over space and time.
 
== Flows on manifolds ==
When a '''probabilistic flow''' transforms a distribution on an <math>m</math>-dimensional [[smooth manifold]] embedded in <math>\R^n</math>, where <math>m<n</math>, and where the transformation is specified as a function <math>\R^n\to\R^n</math>, the scaling factor between the source and transformed [[probability density|PDFs]] is ''not'' given by the naive computation of the [[Jacobian matrix and determinant|determinant of the <math>n\text{-by-}n</math> Jacobian]] (which is zero), but instead by the determinant(s) of one or more suitably defined <math>m\text{-by-}m</math> matrices. This section is an interpretation of the tutorial in the appendix of Sorrenson et al. (2023),<ref name='manifold_flow'>{{Cite arXiv
|last1=Sorrenson
|first1=Peter
|last2=Draxler
|first2=Felix
|last3=Rousselot
|first3=Armand
|last4=Hummerich
|first4=Sander
|last5=Köthe
|first5=Ullrich
|title=Learning Distributions on Manifolds with Free-Form Flows
|eprint=2312.09852
|year=2023
|class=cs.LG
}}</ref> where the more general case of non-isometrically embedded [[Riemannian manifold|Riemannian manifolds]] is also treated. Here we restrict attention to [[isometry|isometrically]] embedded manifolds.
 
As running examples of manifolds with smooth, isometric embedding in <math>\R^n</math> we shall use:
* The [[n-sphere|unit hypersphere]]: <math>\mathbb S^{n-1}=\{\mathbf x\in\R^n:\mathbf x'\mathbf x=1\}</math>, where flows can be used to generalize e.g. [[Von Mises-Fisher distribution|Von Mises-Fisher]] or uniform spherical distributions.
* The [[simplex]] interior: <math>\Delta^{n-1}=\{\mathbf p=(p_1,\dots,p_n)\in\R^n:p_i>0, \sum_ip_i=1\}</math>, where <math>n</math>-way [[categorical distribution]]s live; and where flows can be used to generalize e.g. [[Dirichlet distribution|Dirichlet]], or uniform simplex distributions.
 
As a first example of a spherical manifold flow transform, consider the [[ACG distribution#ACG via transformation of normal or uniform variates|normalized linear transform]], which radially projects onto the unit sphere the output of an invertible linear transform, parametrized by the <math>n\text{-by-}n</math> invertible matrix <math>\mathbf M</math>:
:<math>
f_\text{lin}(\mathbf x; \mathbf M) = \frac{\mathbf{Mx}}{\lVert\mathbf{Mx}\rVert}
</math>
In full Euclidean space, <math>f_\text{lin}:\R^n\to\R^n</math> is ''not'' invertible, but if we restrict the domain and codomain to the unit sphere, then <math>f_\text{lin}:\mathbb S^{n-1}\to\mathbb S^{n-1}</math> ''is'' invertible (more specifically it is a [[bijection]], a [[homeomorphism]] and a [[diffeomorphism]]), with inverse <math>f_\text{lin}(\cdot\,;\mathbf M^{-1})
</math>. The Jacobian of <math>f_\text{lin}:\R^n\to\R^n</math>, at <math>\mathbf y=f_\text{lin}(\mathbf x;\mathbf M)</math> is <math>\lVert\mathbf{Mx}\rVert^{-1}(\mathbf I_n -\mathbf{yy}')\mathbf M</math>, which has rank <math>n-1</math> and determinant of zero; while [[Projected normal distribution#Wider application of the normalized linear transform|as explained here]], the factor (see subsection below) relating source and transformed densities is: <math>\lVert\mathbf{Mx}\rVert^{-n}\left|\operatorname{det}\mathbf M\right|</math>.
 
=== Differential volume ratio ===
For <math>m<n</math>, let <math>\mathcal M\subset\R^n</math> be an <math>m</math>-dimensional manifold with a smooth, isometric embedding into <math>\R^n</math>. Let <math>f:\R^n\to\R^n</math> be a smooth flow transform with range restricted to <math>\mathcal M</math>. Let <math>\mathbf x\in\mathcal M</math> be sampled from a distribution with density <math>P_X</math>. Let <math>\mathbf y=f(\mathbf x)</math>, with resultant (pushforward) density <math>P_Y</math>. Let <math>U\subset\mathcal M</math> be a small, convex region containing <math>\mathbf x</math> and let <math>V=f(U)</math> be its image, which contains <math>\mathbf y</math>; then by conservation of probability mass:
:<math>
P_X(\mathbf x)\operatorname{volume}(U)\approx P_Y(\mathbf y)\operatorname{volume}(V)
</math>
where volume (for very small regions) is given by [[Lebesgue measure]] in <math>m</math>-dimensional [[tangent space]]. By making the regions infinitesimally small, the factor relating the two densities is the ratio of volumes, which we term the '''differential volume ratio'''.
 
To obtain concrete formulas for volume on the <math>m</math>-dimensional manifold, we construct <math>U</math> by mapping an <math>m</math>-dimensional rectangle in (local) coordinate space to the manifold via a smooth embedding function: <math>\R^m\to\R^n</math>. At very small scale, the embedding function becomes essentially linear so that <math>U</math> is a [[Parallelepiped#Parallelotope|parallelotope]] (multidimensional generalization of a parallelogram). Similarly, the flow transform <math>f</math> becomes linear, so that the image, <math>V=f(U)</math> is also a parallelotope. In <math>\R^m</math>, we can represent an <math>m</math>-dimensional parallelotope with an <math>m\text{-by-}m</math> matrix whose column-vectors are a set of edges (meeting at a common vertex) that span the parallelotope. The [[Determinant#Volume and Jacobian determinant|volume is given by the absolute value of the determinant]] of this matrix. If more generally (as is the case here), an <math>m</math>-dimensional parallelotope is embedded in <math>\R^n</math>, it can be represented with a (tall) <math>n\text{-by-}m</math> matrix, say <math>\mathbf V</math>. Denoting the parallelotope as <math>/\mathbf V\!/</math>, its volume is then given by the square root of the [[Gram determinant]]:
:<math>
\operatorname{volume}/\mathbf V\!/=\sqrt{\left|\operatorname{det}(\mathbf V'\mathbf V)\right|}
</math>
In the sections below, we show various ways to use this volume formula to derive the differential volume ratio.
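
Numerically, the Gram-determinant volume formula can be evaluated directly; a minimal sketch:
<pre>
import torch

def parallelotope_volume(V):
    # V: n-by-m matrix whose columns span an m-dimensional parallelotope embedded in R^n.
    # volume = sqrt(|det(V' V)|), the square root of the Gram determinant
    return torch.sqrt(torch.abs(torch.det(V.T @ V)))

# Unit square embedded in R^3: two orthonormal spanning edges give volume 1.
V = torch.tensor([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.0, 0.0]])
print(parallelotope_volume(V))   # tensor(1.)
</pre>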
 
=== Simplex flow===
As a first example, we develop expressions for the differential volume ratio of a simplex flow, <math>\mathbf q=f(\mathbf p)</math>, where <math>\mathbf p, \mathbf q\in\mathcal M=\Delta^{n-1}</math>. Define the '''embedding function''':
:<math>e:\tilde\mathbf p=(p_1,\dots,p_{n-1})\mapsto\mathbf p=(p_1,\dots,p_{n-1},1-\sum_{i=1}^{n-1}p_i)
</math>
which maps a conveniently chosen, <math>(n-1)</math>-dimensional representation, <math>\tilde\mathbf p</math>, to the embedded manifold. The <math>n\text{-by-}(n-1)</math> Jacobian is
<math>\mathbf E = \begin{bmatrix}
\mathbf{I}_{n-1} \\
-\boldsymbol{1}'
\end{bmatrix}
</math>.
To define <math>U</math>, the differential volume element at the transformation input (<math>\mathbf p\in\Delta^{n-1}</math>), we start with a rectangle in <math>\tilde\mathbf p</math>-space, having (signed) differential side-lengths, <math>dp_1, \dots, dp_{n-1}</math> from which we form the square diagonal matrix <math>\mathbf D</math>, the columns of which span the rectangle. At very small scale, we get <math>U=e(\mathbf D)=/\mathbf{ED}\!/</math>, with:
[[File:Simplex measure pullback.svg|frame|right|For the 1-simplex (blue) embedded in <math>\R^2</math>, when we pull back [[Lebesgue measure]] from [[tangent space]] (parallel to the simplex), via the embedding <math>p_1\mapsto(p_1,1-p_1)</math>, with Jacobian <math>\mathbf E=\begin{bmatrix}1&-1\end{bmatrix}'</math>, a scaling factor of <math>\sqrt{\mathbf E'\mathbf E}=\sqrt2</math> results.]]
:<math>\operatorname{volume}(U) = \sqrt{\left|\operatorname{det}(\mathbf{DE}'\mathbf{ED})\right|}
= \sqrt{\left|\operatorname{det}(\mathbf E'\mathbf E)\right|}\,
\left|\operatorname{det}\mathbf D\right|
=\sqrt n\prod_{i=1}^{n-1} \left|dp_i\right|
</math>
To understand the geometric interpretation of the factor <math>\sqrt{n}</math>, see the example for the 1-simplex in the diagram at right.
 
The differential volume element at the transformation output (<math>\mathbf q\in\Delta^{n-1}</math>), is the parallelotope, <math>V=f(U)=/\mathbf{F_pED}\!/</math>, where <math>\mathbf{F_p}</math> is the <math>n\text{-by-}n</math> Jacobian of <math>f</math> at <math>\mathbf p=e(\tilde\mathbf p)</math>. Its volume is:
:<math>
\operatorname{volume}(V) =
\sqrt{\left|\operatorname{det}(\mathbf{DE}'\mathbf{ F_p}'\mathbf{F_pED})\right|}
= \sqrt{\left|\operatorname{det}(\mathbf E'\mathbf{F_p}'\mathbf{F_pE})\right|}\,
\left|\operatorname{det}\mathbf D\right|
</math>
so that the factor <math>\left|\operatorname{det}\mathbf D\right|</math> cancels in the volume ratio, which can now already be numerically evaluated. It can however be rewritten in a sometimes more convenient form by also introducing the '''representation function''', <math>r:\mathbf p\mapsto\tilde\mathbf p</math>, which simply extracts the first <math>(n-1)</math> components. Its Jacobian is <math>\mathbf R=\begin{bmatrix}\mathbf I_{n-1}&\boldsymbol0\end{bmatrix}</math>. Observe that, since <math>e\circ r\circ f=f</math>, the [[Chain rule#General rule: Vector-valued functions with multiple inputs|chain rule for function composition]] gives: <math>\mathbf{ERF_p}=\mathbf{F_p}</math>. By plugging this expansion into the above Gram determinant and then refactoring it as a product of determinants of square matrices, we can extract the factor <math>\sqrt{\left|\operatorname{det}(\mathbf E'\mathbf E)\right|}=\sqrt n</math>, which now also cancels in the ratio, which finally simplifies to the determinant of the Jacobian of the "sandwiched" flow transformation, <math>r\circ f\circ e</math>:
:<math>
R^\Delta_f(\mathbf p)=\frac{\operatorname{volume}(V)}{\operatorname{volume}(U)}
= \left|\operatorname{det}(\mathbf{RF_pE})\right|
</math>
which, if <math>\mathbf p\sim P_\mathbf P</math>, can be used to derive the pushforward density after a change of variables, <math>\mathbf q = f(\mathbf p)</math>:
:<math>
P_{\mathbf Q}(\mathbf q) = \frac{P_{\mathbf P}(\mathbf p)}{R^\Delta_f(\mathbf p)}\,,\;\text{where}\;\;
\mathbf p=f^{-1}(\mathbf q)
</math>
This formula is valid only because the simplex is flat and the Jacobian, <math>\mathbf E</math> is constant. The more general case for curved manifolds is discussed below, after we present two concrete examples of simplex flow transforms.
 
====Simplex calibration transform====
A [[Dirichlet distribution#Generalization by scaling and translation of log-probabilities|calibration transform]], <math>f_\text{cal}:\Delta^{n-1}\to\Delta^{n-1}</math>, which is sometimes used in machine learning for post-processing of the (class posterior) outputs of a probabilistic <math>n</math>-class classifier,<ref>{{Cite conference|last1=Brümmer
|first1=Niko
|last2=van Leeuwen
|first2=D. A.
|title=On calibration of language recognition scores
|book-title=Proceedings of IEEE Odyssey: The Speaker and Language Recognition Workshop
|year=2006
|___location=San Juan, Puerto Rico
|pages=1–8
|doi=10.1109/ODYSSEY.2006.248106}}
</ref><ref>{{Cite arXiv
|last1=Ferrer
|first1=Luciana
|last2=Ramos
|first2=Daniel
|title=Evaluating Posterior Probabilities: Decision Theory, Proper Scoring Rules, and Calibration
|eprint=2408.02841
|year=2024
|class=stat.ML
}}</ref> uses the [[softmax function]] to renormalize categorical distributions after scaling and translation of the input distributions in log-probability space. For <math>\mathbf p, \mathbf q\in\Delta^{n-1}</math> and with parameters, <math>a\ne0</math> and <math>\mathbf c\in\R^n</math> the transform can be specified as:
:<math>
\mathbf q=f_\text{cal}(\mathbf p; a, \mathbf c) = \operatorname{softmax}(a^{-1}\log\mathbf p+\mathbf c)\;\iff\;
\mathbf p=f^{-1}_\text{cal}(\mathbf q; a, \mathbf c) = \operatorname{softmax}(a\log\mathbf q-a\mathbf c)
</math>
where the log is applied elementwise. After some algebra the '''differential volume ratio''' can be expressed as:
:<math>
R^\Delta_\text{cal}(\mathbf p; a, \mathbf c) = \left|\operatorname{det}(\mathbf{RF_pE})\right| = \left|a\right|^{1-n}\prod_{i=1}^n\frac{q_i}{p_i}
</math>
* This result can also be obtained by factoring the density of the [[SGB distribution]],<ref name="sgb">{{cite web |last1=Graf |first1=Monique (2019)|title=The Simplicial Generalized Beta distribution - R-package SGB and applications |url=https://libra.unine.ch/server/api/core/bitstreams/dd593778-b1fd-4856-855b-7b21e005ee77/content |website=Libra |access-date=26 May 2025}}</ref> which is obtained by sending [[Dirichlet distribution|Dirichlet]] variates through <math>f_\text{cal}</math>.
While calibration transforms are most often trained as [[discriminative model]]s, the reinterpretation here as a probabilistic flow also allows the design of [[generative model|generative]] calibration models based on this transform. When used for calibration, the restriction <math>a>0</math> can be imposed to prevent direction reversal in log-probability space. With the additional restriction <math>\mathbf c=\boldsymbol0</math>, this transform (with discriminative training) is known in machine learning as [[Platt scaling#Analysis|temperature scaling]].
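
A minimal numerical sketch of the calibration transform, its inverse, and the closed-form volume ratio (the parameter values are arbitrary illustrations):
<pre>
import torch

def f_cal(p, a, c):
    # q = softmax(a^{-1} * log p + c), for a point p in the simplex interior
    return torch.softmax(torch.log(p) / a + c, dim=-1)

def f_cal_inv(q, a, c):
    return torch.softmax(a * torch.log(q) - a * c, dim=-1)

def volume_ratio_cal(p, q, a):
    # closed form: |a|^(1-n) * prod_i q_i / p_i
    n = p.numel()
    return abs(a) ** (1 - n) * torch.prod(q / p)

p = torch.tensor([0.2, 0.3, 0.5])
a, c = 2.0, torch.tensor([0.1, -0.2, 0.0])
q = f_cal(p, a, c)
print(torch.allclose(f_cal_inv(q, a, c), p, atol=1e-6))   # round-trip check: True
print(volume_ratio_cal(p, q, a))                          # differential volume ratio at p
</pre>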
 
====Generalized calibration transform====
The above calibration transform can be generalized to <math>f_\text{gcal}:\Delta^{n-1}\to\Delta^{n-1}</math>, with parameters <math>\mathbf c\in\R^n</math> and <math>\mathbf A</math> <math>n\text{-by-}n</math> invertible:<ref>{{Cite thesis
|last1=Brümmer
|first1=Niko
|title=Measuring, refining and calibrating speaker and language information extracted from speech
|type=PhD thesis
|institution=Department of Electrical & Electronic Engineering, University of Stellenbosch
|___location=Stellenbosch, South Africa
|date=18 October 2010
|url=https://scholar.sun.ac.za/items/1b46805b-2b1e-46aa-83ce-75ede92f0159
}}</ref>
:<math>
\mathbf q = f_\text{gcal}(\mathbf p;\mathbf A,\mathbf c)
= \operatorname{softmax}(\mathbf A\log\mathbf p + \mathbf c)\,,\;\text{subject to}\;
\mathbf{A1}=\lambda\mathbf1
</math>
where the condition that <math>\mathbf A</math> has <math>\mathbf1</math> as an [[eigenvector]] ensures invertibility by sidestepping the information loss due to the invariance: <math>\operatorname{softmax}(\mathbf x+\alpha\mathbf1)=\operatorname{softmax}(\mathbf x)</math>. Note in particular that <math>\mathbf A=\lambda\mathbf I_n</math> is the ''only'' allowed diagonal parametrization, in which case we recover <math>f_\text{cal}(\mathbf p;\lambda^{-1},\mathbf c)</math>, while (for <math>n>2</math>) generalization ''is'' possible with non-diagonal matrices. The '''inverse''' is:
:<math>
\mathbf p = f_\text{gcal}^{-1}(\mathbf q;\mathbf A, \mathbf c)
= f_\text{gcal}(\mathbf q;\mathbf A^{-1}, -\mathbf A^{-1}\mathbf c)\,,\;\text{where}\;
\mathbf{A1}=\lambda\mathbf1\Longrightarrow\mathbf{A}^{-1}\mathbf1=\lambda^{-1}\mathbf1
</math>
The '''differential volume ratio''' is:
:<math>
R^\Delta_\text{gcal}(\mathbf p;\mathbf A,\mathbf c)
=\frac{\left|\operatorname{det}(\mathbf A)\right|}{|\lambda|}\prod_{i=1}^n\frac{q_i}{p_i}
</math>
If <math>f_\text{gcal}</math> is to be used as a calibration transform, further constraints could be imposed, for example that <math>\mathbf A</math> be [[positive definite matrix|positive definite]], so that <math>(\mathbf{Ax})'\mathbf x>0</math>, which avoids direction reversals. (This is one possible generalization of the constraint <math>a>0</math> on the <math>f_\text{cal}</math> parameter.)
 
For <math>n=2</math>, with <math>a>0</math> and <math>\mathbf A</math> positive definite, <math>f_\text{cal}</math> and <math>f_\text{gcal}</math> are equivalent in the sense that in both cases <math>\log\frac{p_1}{p_2}\mapsto\log\frac{q_1}{q_2}</math> is a straight line, whose (positive) slope and offset are functions of the transform parameters. For <math>n>2</math>, <math>f_\text{gcal}</math> ''does'' generalize <math>f_\text{cal}</math>.
 
Note, however, that chaining multiple <math>f_\text{gcal}</math> flow transformations does ''not'' give a further generalization, because:
:<math>
f_\text{gcal}(\cdot\,;\mathbf A_1,\mathbf c_1) \circ
f_\text{gcal}(\cdot\,;\mathbf A_2,\mathbf c_2)
= f_\text{gcal}(\cdot\,;\mathbf A_1\mathbf A_2,\mathbf c_1+\mathbf A_1\mathbf c_2)
</math>
In fact, the set of <math>f_\text{gcal}</math> transformations forms a [[group (mathematics)|group]] under function composition, and the set of <math>f_\text{cal}</math> transformations forms a subgroup.
 
Also see: '''Dirichlet calibration''',<ref>{{cite arXiv
| title = Beyond temperature scaling: Obtaining well-calibrated multiclass probabilities with Dirichlet calibration
| author = Meelis Kull, Miquel Perelló‑Nieto, Markus Kängsepp, Telmo Silva Filho, Hao Song, Peter A. Flach
| eprint = 1910.12656
| date = 28 October 2019
| class = cs.LG
}}</ref> which generalizes <math>f_\text{gcal}</math>, by not placing any restriction on the matrix, <math>\mathbf A</math>, so that invertibility is not guaranteed. While Dirichlet calibration is trained as a discriminative model, <math>f_\text{gcal}</math> can also be trained as part of a generative calibration model.
 
===Differential volume ratio for curved manifolds===
Consider a flow, <math>\mathbf y=f(\mathbf x)</math> on a curved manifold, for example <math>\mathbb S^{n-1}</math> which we equip with the embedding function, <math>e</math> that maps a set of <math>(n-1)</math> [[N-sphere#Spherical coordinates|angular spherical coordinates]] to <math>\mathbb S^{n-1}</math>. The Jacobian of <math>e</math> is non-constant and we have to evaluate it at both input (<math>\mathbf {E_x}</math>) and output (<math>\mathbf {E_y}</math>). The same applies to <math>r</math>, the representation function that recovers spherical coordinates from points on <math>\mathbb S^{n-1}</math>, for which we need the Jacobian at the output (<math>\mathbf{R_y}</math>). The differential volume ratio now generalizes to:
:<math>
R_f(\mathbf x) = \left|\operatorname{det}(\mathbf{R_yF_xE_x})\right|\,\frac{\sqrt{\left|\operatorname{det}(\mathbf E_\mathbf y'\mathbf{E_y})\right|}}{\sqrt{\left|\operatorname{det}(\mathbf E_\mathbf x'\mathbf{E_x})\right|}}
</math>
For geometric insight, consider <math>\mathbb S^2</math>, where the spherical coordinates are co-latitude, <math>\theta\in[0,\pi]</math> and longitude, <math>\phi\in[0,2\pi)</math>. At <math>\mathbf x = e(\theta,\phi)</math>, we get <math>\sqrt{\left|\operatorname{det}(\mathbf E_\mathbf x'\mathbf{E_x})\right|}=\sin\theta</math>, which gives the radius of the circle at that latitude (compare e.g. polar circle to equator). The differential volume (surface area on the sphere) is: <math>\sin\theta\,d\theta\,d\phi</math>.
 
The above derivation for <math>R_f</math> is fragile in the sense that when using ''fixed'' functions <math>e,r</math>, there may be places where they are not well-defined, for example at the poles of the 2-sphere where longitude is arbitrary. This problem is sidestepped (using standard manifold machinery) by generalizing to ''local'' coordinates (charts), where in the vicinities of <math>\mathbf x,\mathbf y\in\mathcal M</math>, we map from local <math>m</math>-dimensional coordinates to <math>\R^n</math> and back using the respective function pairs <math>e_{\mathbf x}, r_{\mathbf x}</math> and <math>e_{\mathbf y}, r_{\mathbf y}</math>. We continue to use the same notation for the Jacobians of these functions (<math>\mathbf{E_x}, \mathbf{E_y}, \mathbf{R_y}</math>), so that the above formula for <math>R_f</math> remains valid.
 
We ''can'', however, choose our local coordinate system in a way that simplifies the expression for <math>R_f</math> and indeed also its practical implementation.<ref name=manifold_flow/> Let <math>\pi:\mathcal P\to\R^n</math> be a smooth idempotent projection (<math>\pi\circ\pi=\pi</math>) from the ''projectible set'', <math>\mathcal P\subseteq\R^n</math>, onto the embedded manifold. For example:
* The positive orthant of <math>\R^n</math> is projected onto the '''simplex''' as: <math>\pi(\mathbf z)=\bigl(\sum_{i=1}^n z_i\bigr)^{-1}\mathbf z</math>
* Non-zero vectors in <math>\R^n</math> are projected onto the '''unit sphere''' as: <math>\pi(\mathbf z)=\bigl(\sum_{i=1}^n z^2_i\bigr)^{-\frac12}\mathbf z</math>
For every <math>\mathbf x\in\mathcal M</math>, we require of <math>\pi</math> that its <math>n\text{-by-}n</math> Jacobian, <math>\boldsymbol{\Pi_x}</math> has rank <math>m</math> (the manifold dimension), in which case <math>\boldsymbol{\Pi_x}</math> is an [[projection (linear algebra)|idempotent linear projection]] onto the local tangent space (''orthogonal'' for the unit sphere: <math>\mathbf I_n-\mathbf{xx}'</math>; ''oblique'' for the simplex: <math>\mathbf I_n-\boldsymbol{x1}'</math>). The columns of <math>\boldsymbol{\Pi_x}</math> span the <math>m</math>-dimensional tangent space at <math>\mathbf x</math>. We use the notation <math>\mathbf{T_x}</math> for any <math>n\text{-by-}m</math> matrix with orthonormal columns (<math>\mathbf T_{\mathbf x}'\mathbf{T_x}=\mathbf I_m</math>) that span the local tangent space. Also note: <math>\boldsymbol{\Pi_x}\mathbf{T_x}=\mathbf{T_x}</math>. We can now choose our local coordinate embedding function, <math>e_\mathbf x:\R^m\to\R^n</math>:
:<math>
e_\mathbf x(\tilde x) = \pi(\mathbf x + \mathbf{T_x\tilde x})\,,
\text{with Jacobian:}\,\mathbf{E_x}=\mathbf{T_x}\,\text{at}\,\tilde\mathbf x=\mathbf0.
</math>
Since the Jacobian is injective (full rank: <math>m</math>), a local (not necessarily unique) [[left inverse function|left inverse]], say <math>r^*_\mathbf x</math> with Jacobian <math>\mathbf R^*_\mathbf x</math>, exists such that <math>r^*_\mathbf x(e_\mathbf x(\tilde x))=\tilde x</math> and <math>\mathbf R^*_\mathbf x\mathbf{T_x}=\mathbf I_m</math>. In practice we do not need the left inverse function itself, but we ''do'' need its Jacobian, for which the above equation does not give a unique solution. We can however enforce a unique solution for the Jacobian by choosing the left inverse as, <math>r_\mathbf x:\R^n\to\R^m</math>:
:<math>
r_\mathbf x(\mathbf z) = r^*_\mathbf x(\pi(\mathbf z))\,,\text{with Jacobian:}\,
\mathbf{R_x}=\mathbf T_\mathbf x'
</math>
We can now finally plug <math>\mathbf{E_x}=\mathbf{T_x}</math> and <math>\mathbf{R_y}=\mathbf T_\mathbf y'</math> into our previous expression for <math>R_f</math>, the '''differential volume ratio''', which because of the orthonormal Jacobians, simplifies to:<ref>The tangent matrices are not unique: if <math>\mathbf T </math> has orthonormal columns and <math>\mathbf Q</math> is an [[orthogonal matrix]], then <math>\mathbf{TQ}</math> also has orthonormal columns that span the same subspace; it is easy to verify that <math>\left|\operatorname{det}(\mathbf{T_y}'\mathbf{F_xT_x})\right|</math> is invariant to such transformations of the tangent representatives.</ref>
:<math>
R_f(\mathbf x) = \left|\operatorname{det}(\mathbf{T_y}'\mathbf{F_xT_x})\right|
</math>
 
====Practical implementation====
For learning the parameters of a manifold flow transformation, we need access to the differential volume ratio, <math>R_f</math>, or at least to its gradient w.r.t. the parameters. Moreover, for some inference tasks, we need access to <math>R_f</math> itself. Practical solutions include:
*Sorrenson et al. (2023)<ref name=manifold_flow/> give a solution for computationally efficient stochastic parameter gradient approximation for <math>\log R_f.</math>
*For some hand-designed flow transforms, <math>R_f</math> can be analytically derived in closed form, for example the above-mentioned simplex calibration transforms. Further examples are given below in the section on simple spherical flows.
*On a software platform equipped with [[linear algebra]] and [[automatic differentiation]], <math>R_f(\mathbf x) = \left|\operatorname{det}(\mathbf{T_y}'\mathbf{F_xT_x})\right|</math> can be automatically evaluated, given access to only <math>\mathbf x, f, \pi</math>.<ref>With [[PyTorch]]:
<pre>
from torch.linalg import qr
from torch.func import jacrev
def logRf(pi, m, f, x):
    y = f(x)                                         # push x through the flow
    Fx, PI = jacrev(f)(x), jacrev(pi)                # Jacobian of f at x; Jacobian function of the projection
    Tx, Ty = [qr(PI(z)).Q[:, :m] for z in (x, y)]    # orthonormal bases of the tangent spaces at x and y
    return (Ty.T @ Fx @ Tx).slogdet().logabsdet      # log |det(Ty' Fx Tx)|
</pre></ref> But this is expensive for high-dimensional data, with at least <math>\mathcal O(n^3)</math> computational costs. Even then, the slow automatic solution can be invaluable as a tool for numerically verifying hand-designed closed-form solutions.
 
=== Simple spherical flows ===
In the machine learning literature, various complex spherical flows formed by deep neural network architectures may be found.<ref name=manifold_flow/> In contrast, this section compiles from the ''statistics'' literature the details of three very simple spherical flow transforms, with simple closed-form expressions for inverses and differential volume ratios. These flows can be used individually, or chained, to generalize distributions on the unit sphere, <math>\mathbb S^{n-1}</math>. All three flows are compositions of an invertible affine transform in <math>\R^n</math>, followed by radial projection back onto the sphere. The flavours we consider for the affine transform are: pure translation, pure linear and general affine. To make these flows fully functional for learning, inference and sampling, the tasks are:
* To derive the inverse transform, with suitable restrictions on the parameters to ensure invertibility.
* To derive in simple closed form the '''differential volume ratio''', <math>R_f</math>.
An interesting property of these simple spherical flows is that they do not make use of any non-linearities apart from the radial projection. Even the simplest of them, the normalized translation flow, can be chained to form perhaps surprisingly flexible distributions.
 
==== Normalized translation flow ====
The normalized translation flow, <math>f_\text{trans}:\mathbb S^{n-1}\to\mathbb S^{n-1}</math>, with parameter <math>\mathbf c\in\R^n</math>, is given by:
:<math>
\mathbf y = f_\text{trans}(\mathbf x;\mathbf c)
=\frac{\mathbf x + \mathbf c}{\lVert\mathbf x + \mathbf c\rVert}\,,
\;\text{where}\;\lVert\mathbf c\rVert < 1
</math>
The inverse function may be derived by considering, for <math>\ell>0</math>: <math>\mathbf y=\ell^{-1}(\mathbf x+\mathbf c)</math> and then using <math>\mathbf x'\mathbf x=1</math> to get a [[quadratic equation]] to recover <math>\ell</math>, which gives:
:<math>
\mathbf x = f^{-1}_\text{trans}(\mathbf y;\mathbf c) = \ell\mathbf y - \mathbf c\,,\text{where}\;
\ell = \mathbf y'\mathbf c +\sqrt{(\mathbf y'\mathbf c)^2+1-\mathbf c'\mathbf c}
</math>
from which we see that we need <math>\lVert\mathbf c\rVert < 1</math> to keep <math>\ell</math> real and positive for all <math>\mathbf y\in\mathbb S^{n-1}</math>. The '''differential volume ratio''' is given (without derivation) by Boulerice & Ducharme (1994) as:<ref name=BDflow>
{{cite journal
|last1=Boulerice
|first1=Bernard
|last2=Ducharme
|first2=Gilles R.
|title=Decentered Directional Data
|journal=Annals of the Institute of Statistical Mathematics
|volume=46
|issue=3
|pages=573–586
|year=1994
|doi=10.1007/BF00773518
}}
</ref>
:<math>
R_\text{trans}(\mathbf x;\mathbf c) = \frac{1+\mathbf x'\mathbf c}{\lVert\mathbf x +\mathbf c\rVert^n}
</math>
This can indeed be verified analytically:
*By a laborious manipulation of <math>R_f(\mathbf x) = \left|\operatorname{det}(\mathbf{T_y}'\mathbf{F_xT_x})\right|</math>.
*By setting <math>\mathbf M=\mathbf I_n</math> in <math>R_\text{aff}(\mathbf x;\mathbf M, \mathbf c)</math>, which is given below.
Finally, it is worth noting that <math>f_\text{trans}</math> and <math>f^{-1}_\text{trans}</math> do not have the same functional form.
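
A minimal numerical sketch of the normalized translation flow, its inverse, and the closed-form volume ratio (the parameter values are arbitrary illustrations):
<pre>
import torch

def f_trans(x, c):
    # radially project the translated point back onto the unit sphere
    v = x + c
    return v / v.norm()

def f_trans_inv(y, c):
    yc = y @ c
    ell = yc + torch.sqrt(yc ** 2 + 1 - c @ c)   # positive root of the quadratic; needs norm of c below 1
    return ell * y - c

def R_trans(x, c):
    # closed-form differential volume ratio (Boulerice & Ducharme, 1994)
    n = x.numel()
    return (1 + x @ c) / (x + c).norm() ** n

n = 4
x = torch.randn(n); x = x / x.norm()             # a point on the unit sphere
c = torch.randn(n); c = 0.5 * c / c.norm()       # translation of norm 0.5, inside the unit ball
y = f_trans(x, c)
print(torch.allclose(f_trans_inv(y, c), x, atol=1e-6))   # round-trip check: True
print(R_trans(x, c))                                     # differential volume ratio at x
</pre>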
 
==== Normalized linear flow ====
The normalized linear flow, <math>f_\text{lin}:\mathbb S^{n-1}\to\mathbb S^{n-1}</math>, where parameter <math>\mathbf M</math> is an invertible <math>n\text{-by-}n</math> matrix, is given by:
:<math>
\mathbf y = f_\text{lin}(\mathbf x;\mathbf M)
=\frac{\mathbf{Mx}}{\lVert\mathbf{Mx}\rVert}
\;\iff\;
\mathbf x = f^{-1}_\text{lin}(\mathbf y;\mathbf M)
= f_\text{lin}(\mathbf y;\mathbf M^{-1})
=\frac{\mathbf{M^{-1}y}}{\lVert\mathbf{M^{-1}y}\rVert}
</math>
The '''differential volume ratio''' is:
:<math>
R_\text{lin}(\mathbf x; \mathbf M) =
\frac{\left|\operatorname{det}\mathbf M\right|}
{\lVert\mathbf{Mx}\rVert^n}
</math>
This result can be derived indirectly via the '''Angular central Gaussian distribution (ACG)''',<ref>
{{cite journal|title=Statistical analysis for the angular central Gaussian distribution on the sphere|last1=Tyler|first1=David E|journal=Biometrika|volume=74|number=3|pages=579–589|year=1987|doi=10.2307/2336697|jstor=2336697 }}
</ref> which can be obtained via normalized linear transform of either Gaussian, or uniform spherical variates. The first relationship can be used to derive the ACG density by a marginalization integral over the radius; after which the second relationship can be used to factor out the differential volume ratio. For details, see [[ACG distribution]].
 
==== Normalized affine flow ====
The normalized affine flow, <math>f_\text{aff}:\mathbb S^{n-1}\to\mathbb S^{n-1}</math>, with parameters <math>\mathbf c\in\R^n</math> and <math>\mathbf M</math>, <math>n\text{-by-}n</math> invertible, is given by:
:<math>
f_\text{aff}(\mathbf x;\mathbf M, \mathbf c)
=\frac{\mathbf{Mx} + \mathbf c}{\lVert\mathbf{Mx} + \mathbf c\rVert}\,,
\;\text{where}\;\lVert\mathbf{M^{-1}c}\rVert < 1
</math>
The inverse function, derived in a similar way to the normalized translation inverse is:
:<math>
\mathbf x = f^{-1}_\text{aff}(\mathbf y;\mathbf M,\mathbf c) = \mathbf M^{-1}(\ell\mathbf y - \mathbf c)\,,\text{where}\;
\ell = \frac{\mathbf y'\mathbf{Wc} +\sqrt{(\mathbf y'\mathbf{Wc})^2+\mathbf y'\mathbf{Wy}(1-\mathbf c'\mathbf{Wc})}}{\mathbf y'\mathbf{Wy}}
</math>
where <math>\mathbf W=(\mathbf{MM}')^{-1}</math>. The '''differential volume ratio''' is:
:<math>
R_\text{aff}(\mathbf x; \mathbf M, \mathbf c)
=R_\text{lin}(\mathbf x; \mathbf M+\mathbf c\mathbf x') =
\frac{\left|\operatorname{det}\mathbf M\right|(1+\mathbf x'\mathbf{M^{-1}c})}
{\lVert\mathbf{Mx+c}\rVert^n}
</math>
The final RHS numerator was expanded from <math>\operatorname{det}(\mathbf M + \mathbf{cx}')</math> by the [[matrix determinant lemma]]. Recalling <math>R_f(\mathbf x)=\left|\operatorname{det}(\mathbf T_\mathbf y'\mathbf{F_xT_x})\right|</math>, the equality between <math>R_\text{aff}</math> and <math>R_\text{lin}</math> holds because not only:
:<math>\mathbf x'\mathbf x=1\;\Longrightarrow\;\mathbf y = f_\text{aff}(\mathbf x; \mathbf{M,c})=f_\text{lin}(\mathbf x; \mathbf{M+cx}')
</math>
but also, by orthogonality of <math>\mathbf x</math> to the local tangent space:
:<math>
\mathbf x'\mathbf{T_x}=\boldsymbol0\;\Longrightarrow\;\mathbf F_\mathbf x^\text{aff}\mathbf{T_x} = \mathbf F_\mathbf x^\text{lin}\mathbf{T_x}
</math>
where <math>\mathbf F_\mathbf x^\text{lin}=\lVert\mathbf{Mx}+\mathbf c\rVert^{-1}(\mathbf I_n-\mathbf{yy}')(\mathbf{M+cx}')</math> is the Jacobian of <math>f_\text{lin}</math> differentiated w.r.t. its input, but ''not'' also w.r.t. its parameter.
 
== Downsides ==
Despite the success of normalizing flows in estimating high-dimensional densities, some downsides still exist in their designs. First of all, the latent space onto which input data is mapped is not a lower-dimensional space; therefore, flow-based models do not allow for compression of data by default and require a lot of computation. However, it is still possible to perform image compression with them.<ref name="Lossy Image Compression with Normal">{{cite arXiv | eprint=2008.10486| last1=Helminger| first1=Leonhard| last2=Djelouah| first2=Abdelaziz| last3=Gross| first3=Markus| last4=Schroers| first4=Christopher| title=Lossy Image Compression with Normalizing Flows| year=2020| class=cs.CV}}</ref>
 
Flow-based models are also notorious for failing to estimate the likelihood of out-of-distribution samples (i.e. samples that were not drawn from the same distribution as the training set).<ref>{{cite arXiv | eprint=1810.09136v3| last1=Nalisnick| first1=Eric| last2=Matsukawa| first2=Akihiro| last3=Teh| first3=Yee Whye| last4=Gorur| first4=Dilan| last5=Lakshminarayanan| first5=Balaji| title=Do Deep Generative Models Know What They Don't Know?| year=2018| class=stat.ML}}</ref> Some hypotheses have been formulated to explain this phenomenon, among which the [[Typical Set|typical set]] hypothesis,<ref>{{cite arXiv | eprint=1906.02994| last1=Nalisnick| first1=Eric| last2=Matsukawa| first2=Akihiro| last3=Teh| first3=Yee Whye| last4=Lakshminarayanan| first4=Balaji| title=Detecting Out-of-Distribution Inputs to Deep Generative Models Using Typicality| year=2019| class=stat.ML}}</ref> estimation issues when training models,<ref>{{Cite journal |last1=Zhang |first1=Lily |last2=Goldstein |first2=Mark |last3=Ranganath |first3=Rajesh |date=2021 |title=Understanding Failures in Out-of-Distribution Detection with Deep Generative Models |journal=Proceedings of Machine Learning Research |url=https://proceedings.mlr.press/v139/zhang21g.html |volume=139 |pages=12427–12436|pmid=35860036 |pmc=9295254 }}</ref> or fundamental issues due to the entropy of the data distributions.<ref>{{Cite arXiv|last1=Caterini |first1=Anthony L. |last2=Loaiza-Ganem |first2=Gabriel |date=2022 |title=Entropic Issues in Likelihood-Based OOD Detection |pages=21–26|class=stat.ML |eprint=2109.10794 }}</ref>
 
One of the most interesting properties of normalizing flows is the [[Inverse function|invertibility]] of their learned [[Bijection|bijective]] map. This property is ensured by constraints in the design of the models (cf. RealNVP, Glow) which guarantee theoretical invertibility. The integrity of the inverse is important in order to ensure the applicability of the [[Probability density function#Function of random variables and change of variables in the probability density function|change-of-variable theorem]], the computation of the [[Jacobian matrix and determinant|Jacobian]] of the map as well as sampling with the model. However, in practice this invertibility can be violated, and the inverse map can explode because of numerical imprecision.<ref>{{cite arXiv | eprint=2006.09347| last1=Behrmann| first1=Jens| last2=Vicol| first2=Paul| last3=Wang| first3=Kuan-Chieh| last4=Grosse| first4=Roger| last5=Jacobsen| first5=Jörn-Henrik| title=Understanding and Mitigating Exploding Inverses in Invertible Neural Networks| year=2020| class=cs.LG}}</ref>
 
== Applications ==
 
Flow-based generative models have been applied to a variety of modeling tasks, including:
* Audio generation<ref>{{cite arXiv | eprint=1912.01219| last1=Ping| first1=Wei| last2=Peng| first2=Kainan| last3=Gorur| first3=Dilan| last4=Lakshminarayanan| first4=Balaji| title=WaveFlow: A Compact Flow-based Model for Raw Audio| year=2019| class=cs.SD}}</ref>
* Image generation<ref name="glow" />
* Molecular graph generation<ref>{{cite arXiv | eprint=2001.09382| last1=Shi| first1=Chence| last2=Xu| first2=Minkai| last3=Zhu| first3=Zhaocheng| last4=Zhang| first4=Weinan| last5=Zhang| first5=Ming| last6=Tang| first6=Jian| title=GraphAF: A Flow-based Autoregressive Model for Molecular Graph Generation| year=2020| class=cs.LG}}</ref>
* Point-cloud modeling<ref>{{cite arXiv | eprint=1906.12320| last1=Yang| first1=Guandao| last2=Huang| first2=Xun| last3=Hao| first3=Zekun| last4=Liu| first4=Ming-Yu| last5=Belongie| first5=Serge| last6=Hariharan| first6=Bharath| title=PointFlow: 3D Point Cloud Generation with Continuous Normalizing Flows| year=2019| class=cs.CV}}</ref>
* Video generation<ref>{{cite arXiv | eprint=1903.01434| last1=Kumar| first1=Manoj| last2=Babaeizadeh| first2=Mohammad| last3=Erhan| first3=Dumitru| last4=Finn| first4=Chelsea| last5=Levine| first5=Sergey| last6=Dinh| first6=Laurent| last7=Kingma| first7=Durk| title=VideoFlow: A Conditional Flow-Based Model for Stochastic Video Generation| year=2019| class=cs.CV}}</ref>
* Lossy image compression<ref name="Lossy Image Compression with Normal"/>
* Anomaly detection<ref>{{cite arXiv | eprint=2008.12577| last1=Rudolph| first1=Marco| last2=Wandt| first2=Bastian| last3=Rosenhahn| first3=Bodo| title=Same Same But DifferNet: Semi-Supervised Defect Detection with Normalizing Flows| year=2021| class=cs.CV}}</ref>
 
== References ==
{{reflist}}
 
== External links ==
* [https://lilianweng.github.io/lil-log/2018/10/13/flow-based-deep-generative-models.html Flow-based Deep Generative Models]
* [https://deepgenerativemodels.github.io/notes/flow/ Normalizing flow models]
 
[[Category:Machine learning]]
[[Category:Statistical models]]
[[Category:Probabilistic models]]