=== Proofs ===
The proof relies on the generalized Pythagorean theorem for the [[Bregman divergence]], of which the KL-divergence is a special case.<ref name=Tran2018/>
[[File:Bregman_divergence_Pythagorean.png|right|300px|thumb|Generalized Pythagorean theorem for [[Bregman divergence]].<ref name="Martin2014">{{cite journal |last1=Adamčík |first1=Martin |title=The Information Geometry of Bregman Divergences and Some Applications in Multi-Expert Reasoning |journal=Entropy |date=2014 |volume=16 |issue=12 |pages=6338–6381 |bibcode=2014Entrp..16.6338A |doi=10.3390/e16126338 |doi-access=free}}</ref>]]
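In its KL form, the theorem states that if <math>\mathcal{C}</math> is a [[convex set]] of distributions and <math>Q^{*} = \arg\min_{Q' \in \mathcal{C}} D_{KL}(Q' \parallel P)</math> is the projection of <math>P</math> onto <math>\mathcal{C}</math>, then for every <math>Q \in \mathcal{C}</math>
:<math>D_{KL}(Q \parallel P) \geq D_{KL}(Q \parallel Q^{*}) + D_{KL}(Q^{*} \parallel P),</math>
in analogy with the Euclidean Pythagorean theorem, with the divergence playing the role of squared distance.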
:<math>q_j^{*}(\mathbf{Z}_j\mid \mathbf{X}) = \frac{e^{\operatorname{E}_{i \neq j} [\ln p(\mathbf{Z}, \mathbf{X})]}}{\int e^{\operatorname{E}_{i \neq j} [\ln p(\mathbf{Z}, \mathbf{X})]}\, d\mathbf{Z}_j}</math>
where <math>\operatorname{E}_{i \neq j} [\ln p(\mathbf{Z}, \mathbf{X})]</math> is the [[expected value|expectation]] of the logarithm of the [[joint probability]] of the data and latent variables, taken over all variables not in the partition; refer to<ref name="Lee2021">{{Cite journal |last=Lee |first=Se Yoon |title=Gibbs sampler and coordinate ascent variational inference: A set-theoretical review |journal=Communications in Statistics - Theory and Methods |year=2021 |pages=1–21 |doi=10.1080/03610926.2021.1921214 |arxiv=2008.01006 |s2cid=220935477}}</ref> for a derivation of the distribution <math>q_j^{*}(\mathbf{Z}_j\mid \mathbf{X})</math>.
In practice, we usually work in terms of logarithms, i.e.:
:<math>\ln q_j^{*}(\mathbf{Z}_j\mid \mathbf{X}) = \operatorname{E}_{i \neq j} [\ln p(\mathbf{Z}, \mathbf{X})] + \text{constant}</math>
The constant in the above expression is related to the [[normalizing constant]] (the denominator in the expression above for <math>q_j^{*}</math>) and is usually reinstated by inspection, as the rest of the expression can usually be recognized as being a known type of distribution (e.g. [[Gaussian distribution|Gaussian]], [[gamma distribution|gamma]], etc.).
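For example, with a partition into just two blocks <math>\mathbf{Z} = \{\mathbf{Z}_1, \mathbf{Z}_2\}</math>, the two optimal factors satisfy the coupled conditions
:<math>\ln q_1^{*}(\mathbf{Z}_1\mid \mathbf{X}) = \operatorname{E}_{q_2^{*}} [\ln p(\mathbf{Z}, \mathbf{X})] + \text{constant}, \qquad \ln q_2^{*}(\mathbf{Z}_2\mid \mathbf{X}) = \operatorname{E}_{q_1^{*}} [\ln p(\mathbf{Z}, \mathbf{X})] + \text{constant},</math>
so each factor is defined in terms of an expectation under the other; this is the circular dependence that drives the iterative scheme described below.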
Using the properties of expectations, the expression <math>\operatorname{E}_{i \neq j} [\ln p(\mathbf{Z}, \mathbf{X})]</math> can usually be simplified into a function of the fixed [[hyperparameter]]s of the [[prior distribution]]s over the latent variables and of expectations (and sometimes higher [[moment (mathematics)|moment]]s such as the [[variance]]) of latent variables not in the current partition (i.e. latent variables not included in <math>\mathbf{Z}_j</math>). This creates [[circular dependency|circular dependencies]] between the parameters of the distributions over variables in one partition and the expectations of variables in the other partitions. This naturally suggests an [[iterative]] algorithm, much like EM (the [[expectation-maximization]] algorithm), in which the expectations (and possibly higher moments) of the latent variables are initialized in some fashion (perhaps randomly), and then the parameters of each distribution are computed in turn using the current values of the expectations, after which the expectation of the newly computed distribution is set appropriately according to the computed parameters. An algorithm of this sort is guaranteed to [[limit of a sequence|converge]].<ref>{{cite book |title=Convex Optimization |first1=Stephen P. |last1=Boyd |first2=Lieven |last2=Vandenberghe |year=2004 |publisher=Cambridge University Press |isbn=978-0-521-83378-3 |url=https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf}}</ref>
In other words, for each of the partitions of variables, by simplifying the expression for the distribution over the partition's variables and examining the distribution's functional dependency on the variables in question, the family of the distribution can usually be determined (which in turn determines the value of the constant). The formula for the distribution's parameters will be expressed in terms of the prior distributions' hyperparameters (which are known constants), but also in terms of expectations of functions of variables in other partitions. Usually these expectations can be simplified into functions of expectations of the variables themselves (i.e. the [[mean]]s); sometimes expectations of squared variables (which can be related to the [[variance]] of the variables), or expectations of higher powers (i.e. higher [[moment (mathematics)|moment]]s) also appear. In most cases, the other variables' distributions will be from known families, and the formulas for the relevant expectations can be looked up. However, those formulas depend on those distributions' parameters, which depend in turn on the expectations about other variables. The result is that the formulas for the parameters of each variable's distributions can be expressed as a series of equations with mutual, [[nonlinear]] dependencies among the variables. Usually, it is not possible to solve this system of equations directly. However, as described above, the dependencies suggest a simple iterative algorithm, which in most cases is guaranteed to converge. An example will make this process clearer.
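As a minimal illustrative sketch of such an iteration (assuming the simple conjugate model of i.i.d. Gaussian observations with unknown mean <math>\mu</math> and precision <math>\tau</math>, priors <math>\mu\mid\tau \sim \mathcal{N}(\mu_0, (\lambda_0\tau)^{-1})</math> and <math>\tau \sim \operatorname{Gamma}(a_0, b_0)</math>; the function and variable names are purely illustrative), the coordinate updates can be implemented as follows:
<syntaxhighlight lang="python">
import numpy as np

def cavi_normal_gamma(x, mu0=0.0, lambda0=1.0, a0=1.0, b0=1.0,
                      max_iter=100, tol=1e-10):
    """Coordinate-ascent variational inference for a univariate Gaussian
    with unknown mean and precision, under the conjugate prior
    mu | tau ~ N(mu0, (lambda0 * tau)^-1),  tau ~ Gamma(a0, b0).

    Returns the parameters of the factors
    q(mu) = N(mu_n, 1/lambda_n)  and  q(tau) = Gamma(a_n, b_n).
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    x_sum, x_sq_sum = x.sum(), np.dot(x, x)

    # The Gaussian mean and the gamma shape have closed forms that do not
    # change across iterations.
    mu_n = (lambda0 * mu0 + x_sum) / (lambda0 + n)
    a_n = a0 + (n + 1) / 2.0

    e_tau = a0 / b0            # initial guess for E[tau]
    b_n = b0
    for _ in range(max_iter):
        # Update q(mu) given the current E[tau].
        lambda_n = (lambda0 + n) * e_tau
        e_mu = mu_n
        e_mu2 = 1.0 / lambda_n + mu_n ** 2      # E[mu^2] = Var[mu] + E[mu]^2

        # Update q(tau) given the current moments of mu.
        b_new = b0 + 0.5 * (
            lambda0 * (e_mu2 - 2 * mu0 * e_mu + mu0 ** 2)   # E[lambda0 (mu - mu0)^2]
            + x_sq_sum - 2 * e_mu * x_sum + n * e_mu2       # E[sum_i (x_i - mu)^2]
        )
        if abs(b_new - b_n) < tol:
            b_n = b_new
            break
        b_n = b_new
        e_tau = a_n / b_n       # E[tau] under q(tau) = Gamma(a_n, b_n)

    lambda_n = (lambda0 + n) * (a_n / b_n)
    return mu_n, lambda_n, a_n, b_n

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=0.5, size=200)
print(cavi_normal_gamma(data))
</syntaxhighlight>
In this particular model <math>\mu_N</math> and <math>a_N</math> do not depend on the other factor's expectations, so only <math>\lambda_N</math>, <math>b_N</math> and <math>\operatorname{E}[\tau]</math> are updated inside the loop.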
==A duality formula for variational inference==
[[File:CAVI algorithm explain.jpg|600px|thumb|right|Pictorial illustration of the coordinate ascent variational inference algorithm based on the duality formula.<ref name="Lee2021"/>]]
The following theorem is referred to as a duality formula for variational inference.<ref name="Lee2021"/> It explains some important properties of the variational distributions used in variational Bayes methods.
{{EquationRef|3|Theorem}} Consider two [[probability spaces]] <math>(\Theta,\mathcal{F},P)</math> and <math>(\Theta,\mathcal{F},Q)</math> with <math>Q \ll P</math>. Assume that there is a common dominating [[probability measure]] <math>\lambda</math> such that <math>P \ll \lambda</math> and <math>Q \ll \lambda</math>. Let <math>h</math> denote any real-valued [[random variable]] on <math>(\Theta,\mathcal{F},P)</math> that satisfies <math>h \in L_1(P)</math>. Then the following equality holds
:<math>\ln \operatorname{E}_{P} [\exp h] = \sup_{Q \ll P} \left\{ \operatorname{E}_{Q}[h] - D_{KL}(Q \parallel P) \right\}.</math>
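For example, taking <math>P</math> to be the prior <math>p(\mathbf{Z})</math> over the latent variables and <math>h(\mathbf{Z}) = \ln p(\mathbf{X}\mid\mathbf{Z})</math> yields
:<math>\ln p(\mathbf{X}) = \ln \operatorname{E}_{p(\mathbf{Z})}\left[ p(\mathbf{X}\mid\mathbf{Z}) \right] = \sup_{Q \ll p(\mathbf{Z})} \left\{ \operatorname{E}_{Q}[\ln p(\mathbf{X}\mid\mathbf{Z})] - D_{KL}(Q \parallel p(\mathbf{Z})) \right\},</math>
i.e. the log-evidence is the supremum of the evidence lower bound over all variational distributions <math>Q</math>, and the supremum is attained at the exact posterior <math>Q(\mathbf{Z}) = p(\mathbf{Z}\mid\mathbf{X})</math>.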