== Method ==
[[File:Normalizing-flow.svg|thumb|Scheme for normalizing flows]]
Let <math>z_0</math> be a (possibly multivariate) [[random variable]] with distribution <math>p_0(z_0)</math>.
For <math>i = 1, ..., K</math>, let <math>z_i = f_i(z_{i-1})</math> be a sequence of random variables transformed from <math>z_0</math>. The functions <math>f_1, ..., f_K</math> should be invertible, i.e. each [[inverse function]] <math>f^{-1}_i</math> should exist. The final output <math>z_K</math> models the target distribution.
The log likelihood of <math>z_K</math> is (see [[#Derivation of log likelihood|derivation]]):
: <math>\log p_K(z_K) = \log p_0(z_0) - \sum_{i=1}^{K} \log \left|\det \frac{df_i(z_{i-1})}{dz_{i-1}}\right|</math>
Learning probability distributions by differentiating log Jacobians originated in the Infomax (maximum likelihood) approach to ICA,<ref>Bell, A. J.; Sejnowski, T. J. (1995). "[https://doi.org/10.1162/neco.1995.7.6.1129 An information-maximization approach to blind separation and blind deconvolution]". ''Neural Computation''. '''7''' (6): 1129–1159. doi:10.1162/neco.1995.7.6.1129.</ref> which forms a single-layer (<math>K=1</math>) flow-based model. Relatedly, a single-layer precursor of conditional generative flows appeared in Roth & Baram (1996).<ref>Roth, Z.; Baram, Y. (1996). "[https://doi.org/10.1109/72.536322 Multidimensional density shaping by sigmoids]". ''IEEE Transactions on Neural Networks''. '''7''' (5): 1291–1298. doi:10.1109/72.536322.</ref>
To efficiently compute the log likelihood, the functions <math>f_1, ..., f_K</math> should be easily invertible, and the determinants of their Jacobians should be simple to compute. In practice, the functions <math>f_1, ..., f_K</math> are modeled using [[Deep learning|deep neural networks]], and are trained to minimize the negative log-likelihood of data samples from the target distribution. These architectures are usually designed such that only the forward pass of the neural network is required in both the inverse and the Jacobian determinant calculations. Examples of such architectures include NICE,<ref name=":1">{{cite arXiv | eprint=1410.8516| last1=Dinh| first1=Laurent| last2=Krueger| first2=David| last3=Bengio| first3=Yoshua| title=NICE: Non-linear Independent Components Estimation| year=2014| class=cs.LG}}</ref> RealNVP,<ref name=":2">{{cite arXiv | eprint=1605.08803| last1=Dinh| first1=Laurent| last2=Sohl-Dickstein| first2=Jascha| last3=Bengio| first3=Samy| title=Density estimation using Real NVP| year=2016| class=cs.LG}}</ref> and Glow.<ref name="glow">{{cite arXiv | eprint=1807.03039| last1=Kingma| first1=Diederik P.| last2=Dhariwal| first2=Prafulla| title=Glow: Generative Flow with Invertible 1x1 Convolutions| year=2018| class=stat.ML}}</ref>
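For illustration, the following is a minimal sketch of an affine coupling layer in the style of RealNVP (assuming [[PyTorch]]; the class and variable names are illustrative, not those of the cited architectures). The log-determinant of its Jacobian is a simple sum, and the inverse re-uses the same forward pass of the conditioner network.
<pre>
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """RealNVP-style coupling layer: the first half of the input passes through
    unchanged and parametrizes an elementwise affine map of the second half."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.d = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.d, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.d)))

    def forward(self, z):
        z_a, z_b = z[:, :self.d], z[:, self.d:]
        log_s, t = self.net(z_a).chunk(2, dim=1)
        log_s = torch.tanh(log_s)             # keep the scales bounded for stability
        y_b = z_b * torch.exp(log_s) + t
        log_det = log_s.sum(dim=1)            # log|det Jacobian| is just a sum
        return torch.cat([z_a, y_b], dim=1), log_det

    def inverse(self, y):
        y_a, y_b = y[:, :self.d], y[:, self.d:]
        log_s, t = self.net(y_a).chunk(2, dim=1)
        log_s = torch.tanh(log_s)
        z_b = (y_b - t) * torch.exp(-log_s)   # inverted without inverting the network
        return torch.cat([y_a, z_b], dim=1)
</pre>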
In other words, minimizing the [[Kullback–Leibler divergence]] between the model's likelihood and the target distribution is equivalent to [[Maximum likelihood estimation|maximizing the model likelihood]] under observed samples of the target distribution.<ref>{{Cite journal |last1=Papamakarios |first1=George |last2=Nalisnick |first2=Eric |last3=Rezende |first3=Danilo Jimenez |last4=Mohamed |first4=Shakir |last5=Lakshminarayanan |first5=Balaji |date=March 2021 |title=Normalizing Flows for Probabilistic Modeling and Inference |journal=Journal of Machine Learning Research |url=https://jmlr.org/papers/v22/19-1028.html |volume=22 |issue=57 |pages=1–64 |arxiv=1912.02762}}</ref>
Pseudocode for training normalizing flows is as follows:<ref>{{Cite journal |last1=Kobyzev |first1=Ivan |last2=Prince |first2=Simon J.D. |last3=Brubaker |first3=Marcus A. |date=November 2021 |title=Normalizing Flows: An Introduction and Review of Current Methods |journal=IEEE Transactions on Pattern Analysis and Machine Intelligence |volume=43 |issue=11 |pages=3964–3979 |doi=10.1109/TPAMI.2020.2992934 |arxiv=1908.09257}}</ref>
* INPUT. dataset <math>x_{1:n}</math>, normalizing flow model <math>f_\theta(\cdot), p_0 </math>.
* SOLVE. <math>\max_\theta \sum_j \ln p_\theta(x_j)</math> by gradient descent, where <math>p_\theta</math> is the model likelihood obtained from the change-of-variables formula above.
* RETURN. <math>\hat\theta</math>
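A corresponding training-loop sketch (assuming PyTorch; <code>base</code> and <code>layers</code> are illustrative names, with each layer returning its output together with its log-Jacobian-determinant, as in the coupling-layer sketch above):
<pre>
import torch

def train_flow(base, layers, data_loader, epochs=10, lr=1e-3):
    # base: elementwise base distribution p_0, e.g. torch.distributions.Normal(0., 1.)
    # layers: invertible modules, run here in the normalizing (data-to-base) direction,
    #         each returning (output, log|det Jacobian|).
    params = [p for layer in layers for p in layer.parameters()]
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(epochs):
        for x in data_loader:
            z, log_det = x, torch.zeros(x.shape[0])
            for layer in layers:
                z, ld = layer(z)
                log_det = log_det + ld
            # change of variables: log p(x) = log p_0(z) + sum of log-dets
            loss = -(base.log_prob(z).sum(dim=1) + log_det).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
    return layers
</pre>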
Flow transforms can also be defined on smoothly embedded [[manifold]]s, as described by Sorrenson et al. (2023),<ref name=manifold_flow>{{cite arXiv
|last1=Sorrenson
|first1=Peter
|title=Learning Distributions on Manifolds with Free-Form Flows
|eprint=2312.09852
|year=2023
|class=cs.LG
}}</ref> where the more general case of non-isometrically embedded [[Riemannian manifold|Riemann manifolds]] is also treated. Here we restrict attention to [[isometry|isometrically]] embedded manifolds.
As running examples of manifolds with smooth, isometric embedding in <math>\R^n</math> we shall use:
* The [[n-sphere|unit hypersphere]]: <math>\mathbb S^{n-1}=\{\mathbf x\in\R^n:\mathbf x'\mathbf x=1\}</math>, where flows can be used to generalize e.g. [[Von Mises-Fisher distribution|Von Mises-Fisher]] or uniform spherical distributions.
* The [[simplex]] interior: <math>\Delta^{n-1}=\{\mathbf p=(p_1,\dots,p_n)\in\R^n:p_i>0, \sum_ip_i=1\}</math>, where flows can be used to generalize distributions over the parameters of <math>n</math>-way [[categorical distribution]]s, e.g. the [[Dirichlet distribution]].
As a first example of a spherical manifold flow transform, consider the [[ACG distribution#ACG via transformation of normal or uniform variates|normalized linear transform]], which radially projects the output of an invertible linear transform, parametrized by the <math>n\text{-by-}n</math> invertible matrix <math>\mathbf M</math>, back onto the unit sphere:
:<math>
\mathbf y = f_\text{lin}(\mathbf x;\mathbf M)=\frac{\mathbf{Mx}}{\lVert\mathbf{Mx}\rVert}
</math>
In full Euclidean space, <math>f_\text{lin}:\R^n\to\R^n</math> is ''not'' invertible, but if we restrict the ___domain and co-___domain to the unit sphere, then <math>f_\text{lin}:\mathbb S^{n-1}\to\mathbb S^{n-1}</math> ''is'' invertible (more specifically it is a [[bijection]], a [[homeomorphism]] and a [[diffeomorphism]]), with inverse <math>f_\text{lin}(\cdot\,;\mathbf M^{-1})</math>. The Jacobian of <math>f_\text{lin}:\R^n\to\R^n</math>, at <math>\mathbf y=f_\text{lin}(\mathbf x;\mathbf M)</math>, is <math>\lVert\mathbf{Mx}\rVert^{-1}(\mathbf I_n -\mathbf{yy}')\mathbf M</math>, which has rank <math>n-1</math> and determinant of zero; the restricted map <math>f_\text{lin}:\mathbb S^{n-1}\to\mathbb S^{n-1}</math> nevertheless has a well-defined, non-zero [[#Differential volume ratio|differential volume ratio]], which is derived below.
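A brief numerical illustration of the invertibility claim (a PyTorch sketch with illustrative names): applying the normalized linear transform with <math>\mathbf M^{-1}</math> recovers the original point on the sphere.
<pre>
import torch

def f_lin(x, M):
    """Normalized linear transform: radially project M @ x onto the unit sphere."""
    y = M @ x
    return y / torch.linalg.norm(y)

torch.manual_seed(0)
n = 4
M = torch.randn(n, n) + n * torch.eye(n)      # an (almost surely) invertible matrix
x = torch.randn(n)
x = x / torch.linalg.norm(x)                  # a point on the unit sphere

y = f_lin(x, M)
x_back = f_lin(y, torch.linalg.inv(M))        # the same transform with M^{-1}
print(torch.allclose(x, x_back, atol=1e-5))   # True: the restriction to the sphere is invertible
</pre>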
=== Differential volume ratio ===
Consider a flow <math>f</math> on an <math>m</math>-dimensional manifold <math>\mathcal M\subset\R^n</math>, which maps a small region <math>U\subset\mathcal M</math> around <math>\mathbf x</math> to the small region <math>V</math> around <math>\mathbf y=f(\mathbf x)</math>. Conservation of probability mass requires:
:<math>
P_X(\mathbf x)\operatorname{volume}(U)\approx P_Y(\mathbf y)\operatorname{volume}(V)
</math>
where volume (for very small regions) is given by [[Lebesgue measure]] in <math>m</math>-dimensional [[tangent space]]. By making the regions infinitesimally small, the factor relating the two densities is the ratio of volumes, which we term the '''differential volume ratio'''.
To obtain concrete formulas for volume on the <math>m</math>-dimensional manifold, we use the fact that the volume of the parallelepiped spanned by the <math>m</math> columns of an <math>n\text{-by-}m</math> matrix <math>\mathbf V</math> is given by the [[Gram matrix|Gram determinant]]:
:<math>
\operatorname{volume}/\mathbf V\!/=\sqrt{\left|\operatorname{det}(\mathbf V'\mathbf V)\right|}
</math>
For our simplex example, we use the embedding function:
:<math>e:\tilde\mathbf p=(p_1,\dots,p_{n-1})\mapsto\mathbf p=(p_1,\dots,p_{n-1},1-\sum_{i=1}^{n-1}p_i)
</math>
which maps the <math>(n-1)</math>-dimensional representation, <math>\tilde\mathbf p</math>, to the point <math>\mathbf p</math> on the simplex, embedded in <math>\R^n</math>. The (constant) <math>n\text{-by-}(n-1)</math> Jacobian of <math>e</math> is
<math>\mathbf E = \begin{bmatrix}
\mathbf{I}_{n-1} \\
-\mathbf 1'
\end{bmatrix}
</math>.
To define <math>U</math>, the differential volume element at the transformation input, <math>\mathbf p</math>, we use the parallelepiped spanned by the columns of <math>\mathbf{ED}</math>, where <math>\mathbf D=\operatorname{diag}(dp_1,\dots,dp_{n-1})</math> contains small coordinate displacements, so that:
[[File:Simplex measure pullback.svg|frame|right|For the 1-simplex (blue) embedded in <math>\R^2</math>, when we pull back [[Lebesgue measure]] from [[tangent space]] to the representation space (the first coordinate, <math>p_1</math>), volume is scaled by the factor <math>\sqrt2</math>: the simplex has length <math>\sqrt2</math>, while its representation, the unit interval, has length 1.]]
:<math>\operatorname{volume}(U) = \sqrt{\left|\operatorname{det}(\mathbf{DE}'\mathbf{ED})\right|}
= \sqrt{\left|\operatorname{det}(\mathbf E'\mathbf E)\right|}\,\left|\operatorname{det}\mathbf D\right|
=\sqrt n\prod_{i=1}^{n-1} \left|dp_i\right|
</math>
To understand the geometric interpretation of the factor <math>\sqrt{n}</math>, see the example for the 1-simplex in the diagram at right.
Similarly, at the transformation output, <math>\mathbf q=f(\mathbf p)</math>, where the <math>n\text{-by-}n</math> Jacobian of <math>f</math> is <math>\mathbf{F_p}</math>, the image <math>V=f(U)</math> is (to first order) spanned by the columns of <math>\mathbf{F_pED}</math>, so that:
:<math>
\operatorname{volume}(V) =
\sqrt{\left|\operatorname{det}(\mathbf{DE}'\mathbf F_\mathbf p'\mathbf{F_pED})\right|}
=\sqrt{\left|\operatorname{det}(\mathbf E'\mathbf F_\mathbf p'\mathbf{F_pE})\right|}\,
\left|\operatorname{det}\mathbf D\right|
</math>
so that the factor <math>\left|\operatorname{det}\mathbf D\right|</math> cancels in the volume ratio, which can now already be numerically evaluated. It can however be rewritten in a sometimes more convenient form by also introducing the '''representation function''', <math>r:\mathbf p\mapsto\tilde\mathbf p</math>, which simply extracts the first <math>(n-1)</math> components. Its Jacobian is <math>\mathbf R=\begin{bmatrix}\mathbf I_{n-1}&\boldsymbol0\end{bmatrix}</math>. Observe that, since <math>e\circ r\circ f=f</math> on the simplex, the [[chain rule]] gives <math>\mathbf{F_pE}=\mathbf E(\mathbf{RF_pE})</math>, so that <math>\sqrt{\left|\operatorname{det}(\mathbf E'\mathbf F_\mathbf p'\mathbf{F_pE})\right|}=\left|\operatorname{det}(\mathbf{RF_pE})\right|\sqrt{\left|\operatorname{det}(\mathbf E'\mathbf E)\right|}</math> and therefore:
:<math>
R^\Delta_f(\mathbf p)=\frac{\operatorname{volume}(V)}{\operatorname{volume}(U)}
=\left|\operatorname{det}(\mathbf{RF_pE})\right|\,,\;\text{where}\;
\mathbf p=f^{-1}(\mathbf q)
</math>
This formula is valid only because the simplex is flat, so that the Jacobian, <math>\mathbf E</math>, is constant. The more general case for curved manifolds is discussed below, after we present some concrete examples of flows on the simplex.
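The numerical evaluation mentioned above can be sketched as follows (PyTorch, illustrative names): <math>\mathbf E</math> and <math>\mathbf R</math> are the constant matrices defined above, and <math>\mathbf{F_p}</math> is obtained by automatic differentiation of a flow <math>f</math> that is defined, as softmax-based flows are, in a neighbourhood of the simplex.
<pre>
import torch
from torch.autograd.functional import jacobian

def simplex_volume_ratio(f, p):
    """Numerically evaluate R_f(p) = |det(R F_p E)| for a flow f on the simplex interior."""
    n = p.shape[0]
    E = torch.cat([torch.eye(n - 1), -torch.ones(1, n - 1)])          # Jacobian of the embedding e
    R = torch.cat([torch.eye(n - 1), torch.zeros(n - 1, 1)], dim=1)   # Jacobian of the representation r
    F = jacobian(f, p)                                                # n-by-n Jacobian F_p of the flow
    return torch.abs(torch.linalg.det(R @ F @ E))

# Example: a temperature-scaling map on the simplex (a special case of the
# calibration transform introduced below).
p = torch.tensor([0.2, 0.3, 0.5])
f = lambda q: torch.softmax(2.0 * torch.log(q), dim=0)
print(simplex_volume_ratio(f, p))
</pre>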
====Simplex calibration transform====
A transform that is used for the '''calibration''' of probabilistic classifier outputs<ref>{{cite conference
|last1=Brümmer
|first1=Niko
|last2=van Leeuwen
|first2=D. A.
|title=On calibration of language recognition scores
|book-title=Proceedings of IEEE Odyssey: The Speaker and Language Recognition Workshop
|year=2006
|___location=San Juan, Puerto Rico
|pages=1–8
|doi=10.1109/ODYSSEY.2006.248106}}
</ref><ref>{{Cite arXiv
|last1=Ferrer
|first1=Luciana
|eprint=2408.02841
|year=2024
|class=stat.ML
}}</ref> uses the [[softmax function]] to renormalize categorical distributions after scaling and translation of the input distributions in log-probability space. For <math>\mathbf p, \mathbf q\in\Delta^{n-1}</math>, and with parameters <math>a\ne0</math> and <math>\mathbf c\in\R^n</math>, the transform and its inverse are:
:<math>
\mathbf q=f_\text{cal}(\mathbf p; a, \mathbf c) = \operatorname{softmax}(a^{-1}\log\mathbf p+\mathbf c)\;\iff\;
\mathbf p=f^{-1}_\text{cal}(\mathbf q; a, \mathbf c) = \operatorname{softmax}(a\log\mathbf q-a\mathbf c)
</math>
where the log is applied elementwise. After some algebra the '''differential volume ratio''' can be expressed as:
:<math>
R^\Delta_\text{cal}(\mathbf p; a, \mathbf c) = \left|\operatorname{det}(\mathbf{RF_pE})\right| = \left|a\right|^{1-n}\prod_{i=1}^n\frac{q_i}{p_i}
</math>
* This result can also be obtained by factoring the density of the [[SGB distribution]],<ref name="sgb">{{cite web |last1=Graf |first1=Monique (2019)|title=The Simplicial Generalized Beta distribution - R-package SGB and applications |url=https://libra.unine.ch/server/api/core/bitstreams/dd593778-b1fd-4856-855b-7b21e005ee77/content |website=Libra |access-date=26 May 2025}}</ref> which is obtained by sending [[Dirichlet distribution|Dirichlet]] variates through <math>f_\text{cal}</math>.
While calibration transforms are most often trained as [[discriminative model]]s, the reinterpretation here as a probabilistic flow allows also the design of [[generative model|generative]] calibration models based on this transform. When used for calibration, the restriction <math>a>0</math> can be imposed to prevent direction reversal in log-probability space. With the additional restriction <math>\mathbf c=\boldsymbol0</math>, this transform (with discriminative training) is known in machine learning as [[Platt scaling#Analysis|temperature scaling]].
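A sketch of the calibration transform, its inverse and the closed-form volume ratio (PyTorch, illustrative names; the numerical value of the ratio can be cross-checked against the generic evaluation sketched earlier):
<pre>
import torch

def f_cal(p, a, c):
    """Calibration transform on the simplex: softmax(log(p)/a + c)."""
    return torch.softmax(torch.log(p) / a + c, dim=0)

def f_cal_inv(q, a, c):
    """Inverse: softmax(a*log(q) - a*c)."""
    return torch.softmax(a * torch.log(q) - a * c, dim=0)

def volume_ratio_cal(p, a, c):
    """Closed-form differential volume ratio |a|^(1-n) * prod(q_i / p_i)."""
    q = f_cal(p, a, c)
    n = p.shape[0]
    return abs(a) ** (1 - n) * torch.prod(q / p)

p = torch.tensor([0.1, 0.3, 0.6])
a, c = 0.7, torch.tensor([0.2, -0.1, 0.0])
q = f_cal(p, a, c)
print(torch.allclose(f_cal_inv(q, a, c), p, atol=1e-5))   # round trip recovers p
print(volume_ratio_cal(p, a, c))
</pre>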
====Generalized calibration transform====
The above calibration transform can be generalized to <math>f_\text{gcal}:\Delta^{n-1}\to\Delta^{n-1}</math>, with parameters <math>\mathbf c\in\R^n</math> and an invertible <math>n\text{-by-}n</math> matrix <math>\mathbf A</math>:<ref>{{Cite thesis
|last1=Brümmer
|first1=Niko
|title=Measuring, refining and calibrating speaker and language information extracted from speech
|type=PhD thesis
|institution=Department of Electrical & Electronic Engineering, University of Stellenbosch
|___location=Stellenbosch, South Africa
|date=18 October 2010
|url=https://scholar.sun.ac.za/items/1b46805b-2b1e-46aa-83ce-75ede92f0159
}}</ref>
:<math>
\mathbf q = f_\text{gcal}(\mathbf p;\mathbf A,\mathbf c)
= \operatorname{softmax}(\mathbf A\log\mathbf p + \mathbf c)\,,\;\text{subject to}\;
\mathbf{A1}=\lambda\mathbf1
</math>
where the condition that <math>\mathbf A</math> has <math>\mathbf1</math> as an [[eigenvector]] ensures invertibility by sidestepping the information loss due to the invariance: <math>\operatorname{softmax}(\mathbf x+\alpha\mathbf1)=\operatorname{softmax}(\mathbf x)</math>. Note in particular that <math>\mathbf A=\lambda\mathbf I_n</math> is the ''only'' allowed diagonal parametrization, in which case we recover <math>f_\text{cal}(\mathbf p;\lambda^{-1},\mathbf c)</math>, while (for <math>n>2</math>) generalization ''is'' possible with non-diagonal matrices. The '''inverse''' is:
:<math>
\mathbf p = f_\text{gcal}^{-1}(\mathbf q;\mathbf A, \mathbf c)
= f_\text{gcal}(\mathbf q;\mathbf A^{-1}, -\mathbf A^{-1}\mathbf c)\,,\;\text{where}\;
\mathbf{A1}=\lambda\mathbf1\Longrightarrow\mathbf{A}^{-1}\mathbf1=\lambda^{-1}\mathbf1
</math>
The '''differential volume ratio''' is:
:<math>
R^\Delta_\text{gcal}(\mathbf p;\mathbf A,\mathbf c)
=\frac{\left|\operatorname{det}(\mathbf A)\right|}{|\lambda|}\prod_{i=1}^n\frac{q_i}{p_i}
</math>
If <math>f_\text{gcal}</math> is to be used as a calibration transform, further constraints could be imposed, for example that <math>\mathbf A</math> be [[positive definite matrix|positive definite]], so that <math>(\mathbf{Ax})'\mathbf x>0</math> for all <math>\mathbf x\ne\boldsymbol0</math>, which avoids direction reversals. (This is one possible generalization of <math>a>0</math> in the <math>f_\text{cal}</math> parameter.)
For <math>n=2</math>, with <math>a>0</math> and <math>\mathbf A</math> positive definite, <math>f_\text{cal}</math> and <math>f_\text{gcal}</math> are equivalent in the sense that in both cases <math>\log\frac{p_1}{p_2}\mapsto\log\frac{q_1}{q_2}</math> is a straight line, the (positive) slope and offset of which are functions of the transform parameters. For <math>n>2</math>, <math>f_\text{gcal}</math> ''does'' generalize <math>f_\text{cal}</math>.
It must however be noted that chaining multiple <math>f_\text{gcal}</math> flow transformations does ''not'' give a further generalization, because:
:<math>
f_\text{gcal}(\cdot\,;\mathbf A_1,\mathbf c_1) \circ
f_\text{gcal}(\cdot\,;\mathbf A_2,\mathbf c_2)
= f_\text{gcal}(\cdot\,;\mathbf A_1\mathbf A_2,\mathbf c_1+\mathbf A_1\mathbf c_2)
</math>
In fact, the set of <math>f_\text{gcal}</math> transformations forms a [[group (mathematics)|group]] under function composition, and the set of <math>f_\text{cal}</math> transformations forms a subgroup.
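A sketch that checks the composition rule numerically (PyTorch, illustrative names; the construction of the parameter matrix is just one way to satisfy <math>\mathbf{A1}=\lambda\mathbf1</math>):
<pre>
import torch

def f_gcal(p, A, c):
    """Generalized calibration transform: softmax(A @ log(p) + c)."""
    return torch.softmax(A @ torch.log(p) + c, dim=0)

def make_A(n, lam=1.5, seed=0):
    """A random matrix satisfying A @ 1 = lam * 1 (the all-ones eigenvector condition)."""
    g = torch.Generator().manual_seed(seed)
    B = torch.randn(n, n, generator=g)
    ones = torch.ones(n, 1)
    # Overwrite the action of B on the all-ones direction so that A @ 1 = lam * 1.
    return B - (B @ ones) @ ones.T / n + lam * (ones @ ones.T) / n

n = 4
p = torch.softmax(torch.randn(n), dim=0)         # a random point in the simplex interior
A1, c1 = make_A(n, seed=1), torch.randn(n)
A2, c2 = make_A(n, seed=2), torch.randn(n)

lhs = f_gcal(f_gcal(p, A2, c2), A1, c1)          # chain two transforms
rhs = f_gcal(p, A1 @ A2, c1 + A1 @ c2)           # the single equivalent transform
print(torch.allclose(lhs, rhs, atol=1e-5))       # True: the composition stays in the family
</pre>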
Also see: '''Dirichlet calibration''',<ref>{{cite arXiv
| title = Beyond temperature scaling: Obtaining well-calibrated multiclass probabilities with Dirichlet calibration
| author = Meelis Kull, Miquel Perelló‑Nieto, Markus Kängsepp, Telmo Silva Filho, Hao Song, Peter A. Flach
| eprint = 1910.12656
| date = 28 October 2019
| class = cs.LG
}}</ref> which generalizes <math>f_\text{gcal}</math>, by not placing any restriction on the matrix, <math>\mathbf A</math>, so that invertibility is not guaranteed. While Dirichlet calibration is trained as a discriminative model, <math>f_\text{gcal}</math> can also be trained as part of a generative calibration model.
===Differential volume ratio for curved manifolds===
Consider a flow, <math>\mathbf y=f(\mathbf x)</math>, on a curved manifold, for example <math>\mathbb S^{n-1}</math>, which we equip with the embedding function <math>e</math> that maps a set of <math>(n-1)</math> [[N-sphere#Spherical coordinates|spherical coordinates]] to a point of the sphere, embedded in <math>\R^n</math>, and with the representation function <math>r</math> that maps back to spherical coordinates. In terms of the Jacobians of <math>e</math>, <math>r</math> and <math>f</math>, the differential volume ratio now also has to account for the volume change induced by the embedding itself:
:<math>
R_f(\mathbf x) = \left|\operatorname{det}(\mathbf{R_yF_xE_x})\right|\,\frac{\sqrt{\left|\operatorname{det}(\mathbf E_\mathbf y'\mathbf{E_y})\right|}}{\sqrt{\left|\operatorname{det}(\mathbf E_\mathbf x'\mathbf{E_x})\right|}}
</math>
For geometric insight, consider <math>\mathbb S^2</math>, where the spherical coordinates are co-latitude, <math>\theta\in[0,\pi]</math>, and longitude, <math>\phi\in[0,2\pi)</math>. At <math>\mathbf x = e(\theta,\phi)</math>, we get <math>\sqrt{\left|\operatorname{det}(\mathbf E_\mathbf x'\mathbf{E_x})\right|}=\sin\theta</math>, which gives the radius of the circle at that latitude (compare e.g. polar circle to equator). The differential volume (surface area on the sphere) is: <math>\sin\theta\,d\theta\,d\phi</math>.
The above derivation for <math>R_f</math> is fragile in the sense that when using ''fixed'' functions <math>e,r</math>, there may be places where they are not well-defined, for example at the poles of the 2-sphere where longitude is arbitrary. This problem is sidestepped (using standard manifold machinery) by generalizing to ''local'' coordinates (charts), where in the vicinities of <math>\mathbf x,\mathbf y\in\mathcal M</math>, we map from local <math>m</math>-dimensional coordinates to <math>\R^n</math> and back using the respective function pairs <math>e_{\mathbf x}, r_{\mathbf x}</math> and <math>e_{\mathbf y}, r_{\mathbf y}</math>. We continue to use the same notation for the Jacobians of these functions (<math>\mathbf{E_x}, \mathbf{E_y}, \mathbf{R_y}</math>), so that the above formula for <math>R_f</math> remains valid.
We ''can'', however, choose our local coordinate system in a way that simplifies the expression for <math>R_f</math> and indeed also its practical implementation.<ref name=manifold_flow/> Let <math>\pi:\mathcal P\to\R^n</math> be a smooth idempotent projection (<math>\pi\circ\pi=\pi</math>) from the ''projectible set'', <math>\mathcal P\subseteq\R^n</math>, onto the embedded manifold. For example:
* For the unit hypersphere, radial projection: <math>\pi(\mathbf z)=\mathbf z/\lVert\mathbf z\rVert</math>, with projectible set <math>\mathcal P=\R^n\setminus\{\boldsymbol0\}</math>.
* For the simplex, normalization: <math>\pi(\mathbf z)=\mathbf z/\sum_iz_i</math>, with <math>\mathcal P</math> the open positive [[orthant]].
For every <math>\mathbf x\in\mathcal M</math>, we require of <math>\pi</math> that its <math>n\text{-by-}n</math> Jacobian, <math>\boldsymbol{\Pi_x}</math>, has rank <math>m</math> (the manifold dimension), in which case <math>\boldsymbol{\Pi_x}</math> is an [[projection (linear algebra)|idempotent projection matrix]] whose column space is the [[tangent space]] at <math>\mathbf x</math>. Let <math>\mathbf{T_x}</math> be an <math>n\text{-by-}m</math> matrix whose columns form an [[orthonormal basis]] of this tangent space. We can now choose the local embedding and representation functions:
:<math>
e_\mathbf x(\tilde\mathbf x) = \pi(\mathbf x + \mathbf{T_x\tilde x})\,,
\text{with Jacobian:}\,\mathbf{E_x}=\mathbf{T_x}\,\text{at}\,\tilde\mathbf x=\mathbf0.
</math>
:<math>
r_\mathbf x(\mathbf z) = r^*_\mathbf x(\pi(\mathbf z))\,,\text{with Jacobian:}\,
\mathbf{R_x}=\mathbf{T_x}'\boldsymbol{\Pi_x}\,\text{at}\,\mathbf z=\mathbf x,
</math>
where <math>r^*_\mathbf x</math> maps points of the manifold near <math>\mathbf x</math> to local coordinates and has Jacobian <math>\mathbf{T_x}'</math> at <math>\mathbf x</math>. With these choices <math>\mathbf{E_x}=\mathbf{T_x}</math> and <math>\mathbf{E_y}=\mathbf{T_y}</math> have orthonormal columns, so that the two square-root factors above equal one and the differential volume ratio simplifies to:
:<math>
R_f(\mathbf x) = \left|\operatorname{det}(\mathbf{T_y}'\mathbf{F_xT_x})\right|
</math>
====Practical implementation====
For learning the parameters of a manifold flow transformation, we need access to the differential volume ratio, <math>R_f</math>, or at least to its gradient w.r.t. the parameters. Moreover, for some inference tasks, we need access to <math>R_f</math> itself. Practical solutions include:
*Sorrenson et al. (2023)<ref name=manifold_flow/> give a computationally efficient stochastic approximation of the parameter gradient of <math>\log R_f</math>.
*For some hand-designed flow transforms, <math>R_f</math> can be analytically derived in simple closed form, as shown in the examples below.
*On a software platform equipped with [[linear algebra]] and [[automatic differentiation]], <math>R_f(\mathbf x) = \left|\operatorname{det}(\mathbf{T_y}'\mathbf{F_xT_x})\right|</math> can be automatically evaluated, given access to only <math>\mathbf x, f, \pi</math>.<ref>With [[PyTorch]]:
<pre>
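# One possible implementation sketch (illustrative names), evaluating
# R_f(x) = |det(T_y' F_x T_x)| from only x, the flow f and the projection pi:
import torch
from torch.autograd.functional import jacobian

def tangent_basis(pi, x, m):
    # The Jacobian of pi at x is idempotent with rank m; its column space is the
    # tangent space at x. An orthonormal basis is given by the left singular
    # vectors associated with the m non-zero singular values.
    P = jacobian(pi, x)
    U, S, Vh = torch.linalg.svd(P)
    return U[:, :m]

def volume_ratio(f, pi, x, m):
    y = f(x)
    Tx = tangent_basis(pi, x, m)     # n-by-m orthonormal tangent basis at x
    Ty = tangent_basis(pi, y, m)     # n-by-m orthonormal tangent basis at y
    Fx = jacobian(f, x)              # n-by-n Jacobian of the flow at x
    return torch.abs(torch.linalg.det(Ty.T @ Fx @ Tx))

# Example: the unit sphere (m = n-1), radial projection and a normalized translation.
n = 3
c = torch.tensor([0.2, 0.1, -0.1])
pi = lambda z: z / torch.linalg.norm(z)
f = lambda z: pi(z + c)
x = pi(torch.tensor([1.0, 2.0, 2.0]))
print(volume_ratio(f, pi, x, n - 1))
</pre>
</ref>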
The simple spherical flow transforms presented in the following subsections are hand-designed so that it is possible:
* To derive the inverse transform, with suitable restrictions on the parameters to ensure invertibility.
* To derive in simple closed form the '''differential volume ratio''', <math>R_f</math>.
An interesting property of these simple spherical flows is that they do not make use of any non-linearities apart from the radial projection. Even the simplest of them, the normalized translation flow, can be chained to form perhaps surprisingly flexible flows on the sphere.
==== Normalized translation flow ====
The '''normalized translation flow''', with parameter <math>\mathbf c\in\R^n</math>, translates its input and radially projects the result back onto the unit sphere:
:<math>
\mathbf y=f_\text{trans}(\mathbf x;\mathbf c)=\frac{\mathbf x+\mathbf c}{\lVert\mathbf x+\mathbf c\rVert}
\;\iff\;
\mathbf x=f^{-1}_\text{trans}(\mathbf y;\mathbf c)=\ell\,\mathbf y-\mathbf c\,,\;\text{where}\;
\ell = \mathbf y'\mathbf c +\sqrt{(\mathbf y'\mathbf c)^2+1-\mathbf c'\mathbf c}
</math>
from which we see that we need <math>\lVert\mathbf c\rVert < 1</math> to keep <math>\ell</math> real and positive for all <math>\mathbf y\in\mathbb S^{n-1}</math>. The '''differential volume ratio''' is given (without derivation) by Boulerice & Ducharme (1994) as:<ref name=BDflow>
{{cite journal
|last1=Boulerice
|pages=573–586
|year=1994
|doi=10.1007/BF00773518
}}
</ref>
:<math>
R_\text{trans}(\mathbf x;\mathbf c)=\frac{1+\mathbf x'\mathbf c}{\lVert\mathbf x+\mathbf c\rVert^{n}}
</math>
Similarly, the '''differential volume ratio''' of the normalized linear flow, <math>\mathbf y=f_\text{lin}(\mathbf x;\mathbf M)=\mathbf{Mx}/\lVert\mathbf{Mx}\rVert</math>, introduced above, is:
:<math>
R_\text{lin}(\mathbf x;\mathbf M)=\frac{\left|\operatorname{det}\mathbf M\right|}{\lVert\mathbf{Mx}\rVert^{n}}
</math>
This result can be derived indirectly via the '''Angular central Gaussian distribution (ACG)''',<ref>
{{cite journal|title=Statistical analysis for the angular central Gaussian distribution on the sphere|last1=Tyler|first1=David E|journal=Biometrika|volume=74|number=3|pages=579–589|year=1987|doi=10.2307/2336697|jstor=2336697 }}
</ref> which can be obtained via normalized linear transform of either Gaussian, or uniform spherical variates. The first relationship can be used to derive the ACG density by a marginalization integral over the radius; after which the second relationship can be used to factor out the differential volume ratio. For details, see [[ACG distribution]].