Normalizing constant

==Definition==
 
In [[probability theory]], a '''normalizing constant''' is a constant by which an everywhere non-negative function must be multiplied so the area under its graph is 1, e.g., to make it a [[probability density function]] or a [[probability mass function]].<ref>[http://www.math.uah.edu/stat/dist/Continuous.xhtml ''Continuous Distributions''] at Department of Mathematical Sciences: University of Alabama in Huntsville</ref><ref>{{harvnb|Feller|1968|p=22}}</ref>
 
==Examples==
 
If we start from the simple [[Gaussian function]]
<math display="block">p(x) = e^{-x^2/2}, \quad x\in(-\infty,\infty) </math>
we have the corresponding [[Gaussian integral]]
<math display="block">\int_{-\infty}^\infty p(x) \, dx = \int_{-\infty}^\infty e^{-x^2/2} \, dx = \sqrt{2\pi\,},</math>
Now if we use the latter's reciprocal value as a normalizing constant for the former, defining a function <math>\varphi(x)</math> as
<math display="block">\varphi(x) = \frac{1}{\sqrt{2\pi\,}} p(x) = \frac{1}{\sqrt{2\pi\,}} e^{-x^2/2} </math>
so that its [[integral of a Gaussian function|integral]] equals one
<math display="block">\int_{-\infty}^\infty \varphi(x) \, dx = \int_{-\infty}^\infty \frac{1}{\sqrt{2\pi\,}} e^{-x^2/2} \, dx = 1 </math>
then the function <math> \varphi(x) </math> is a probability density function.<ref>{{harvnb|Feller|1968|p=174}}</ref> This is the density of the standard [[normal distribution]]. (''Standard'', in this case, means the [[expected value]] is 0 and the [[variance]] is 1.)
 
The constant <math display="inline"> \frac{1}{\sqrt{2\pi}} </math> is the '''normalizing constant''' of the function <math>p(x)</math>.
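
To make this concrete, the following short Python sketch (the integration interval [−10, 10] and step count are arbitrary illustrative choices; the tails outside that interval are negligible) approximates the integral with the trapezoidal rule and recovers the normalizing constant:

<syntaxhighlight lang="python">
import math

# Unnormalized Gaussian p(x) = exp(-x^2 / 2).
def p(x):
    return math.exp(-x * x / 2)

# Trapezoidal rule on [-10, 10]; the mass beyond this range is negligible.
a, b, n = -10.0, 10.0, 100_000
h = (b - a) / n
integral = h * ((p(a) + p(b)) / 2 + sum(p(a + i * h) for i in range(1, n)))

print(integral)                    # ~2.506628..., i.e. sqrt(2*pi)
print(1 / integral)                # ~0.398942..., the normalizing constant
print(1 / math.sqrt(2 * math.pi))  # matches the constant above
</syntaxhighlight>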
Similarly, by the Taylor series of the exponential function,
<math display="block">\sum_{n=0}^\infty \frac{\lambda^n}{n!} = e^\lambda ,</math>
and consequently
<math display="block">f(n) = \frac{\lambda^n e^{-\lambda}}{n!} </math>
is a probability mass function on the set of all nonnegative integers.<ref>{{harvnb|Feller|1968|p=156}}</ref> This is the probability mass function of the [[Poisson distribution]] with expected value λ.
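
A quick numerical check (a minimal Python sketch; the value λ = 3.5 and the cutoff at ''n'' = 100 are arbitrary illustrative choices) confirms that the mass function sums to 1:

<syntaxhighlight lang="python">
import math

lam = 3.5  # arbitrary illustrative value of the parameter λ

# f(n) = λ^n e^{-λ} / n!; the tail beyond n = 100 is negligible for this λ.
total = sum(lam**n * math.exp(-lam) / math.factorial(n) for n in range(100))
print(total)  # ~1.0, since e^{-λ} is the reciprocal of Σ λ^n / n! = e^λ
</syntaxhighlight>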
 
Note that if the probability density function is a function of various parameters, so too is its normalizing constant. The parametrised normalizing constant for the [[Boltzmann distribution]] plays a central role in [[statistical mechanics]]. In that context, the normalizing constant is called the [[partition function (statistical mechanics)|partition function]].
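
As a minimal sketch of this role, assuming a hypothetical three-level system with energies measured in units where ''k''<sub>B</sub>''T'' = 1, the partition function is computed exactly as the normalizing constant of the Boltzmann weights:

<syntaxhighlight lang="python">
import math

# Hypothetical three-level system, energies in units where k_B * T = 1.
energies = [0.0, 1.0, 2.5]

# The partition function Z normalizes the Boltzmann weights exp(-E_i).
Z = sum(math.exp(-E) for E in energies)
probabilities = [math.exp(-E) / Z for E in energies]

print(Z)
print(probabilities)
print(sum(probabilities))  # 1.0 by construction
</syntaxhighlight>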
==Bayes' theorem==
[[Bayes' theorem]] says that the posterior probability measure is proportional to the product of the prior probability measure and the [[likelihood function]]. ''Proportional to'' implies that one must multiply or divide by a normalizing constant to assign measure 1 to the whole space, i.e., to get a probability measure. In a simple discrete case we have
<math display="block">P(H_0|D) = \frac{P(D|H_0)P(H_0)}{P(D)}</math>
 
where P(H<sub>0</sub>) is the prior probability that the hypothesis is true, and P(D|H<sub>0</sub>) is the [[conditional probability]] of the data given that the hypothesis is true; viewed with the data fixed, it is the [[likelihood function|likelihood]] of the hypothesis (or its parameters) given the data. P(H<sub>0</sub>|D) is the posterior probability that the hypothesis is true given the data. P(D) is the probability of producing the data, but on its own it is difficult to calculate, so an alternative way to describe this relationship is as one of proportionality:
<math display="block">P(H_0|D) \propto P(D|H_0)P(H_0).</math>
 
Since P(H<sub>i</sub>|D) is a probability, the sum over all possible (mutually exclusive) hypotheses should be 1, leading to the conclusion that
<math display="block">P(H_0|D) = \frac{P(D|H_0)P(H_0)}{\displaystyle\sum_i P(D|H_i)P(H_i)} .</math>
 
In this case, the [[Multiplicative inverse|reciprocal]] of the value
<math display="block">P(D) = \sum_i P(D|H_i)P(H_i) \;</math>
is the ''normalizing constant''.<ref>{{harvnb|Feller|1968|p=124}}</ref> It can be extended from countably many hypotheses to uncountably many by replacing the sum by an integral.
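
A toy discrete example (the priors and likelihoods below are made-up numbers, purely for illustration) shows P(D) playing this normalizing role:

<syntaxhighlight lang="python">
# Hypothetical priors P(H_i) and likelihoods P(D|H_i), for illustration only.
priors = [0.5, 0.3, 0.2]
likelihoods = [0.10, 0.60, 0.30]

# P(D) = Σ_i P(D|H_i) P(H_i) is the normalizing constant.
p_data = sum(l * p for l, p in zip(likelihoods, priors))
posteriors = [l * p / p_data for l, p in zip(likelihoods, priors)]

print(p_data)           # the normalizing constant P(D)
print(posteriors)       # posterior probabilities
print(sum(posteriors))  # sums to 1
</syntaxhighlight>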
 
In practice, there are many methods of estimating the normalizing constant. These include the bridge sampling technique, the naive Monte Carlo estimator, the generalized harmonic mean estimator, and importance sampling.<ref>{{Cite web |last=Gronau |first=Quentin |date=2020 |title=bridgesampling: An R Package for Estimating Normalizing Constants |url=https://cran.r-project.org/web/packages/bridgesampling/vignettes/bridgesampling_paper.pdf |access-date=September 11, 2021 |website=The Comprehensive R Archive Network}}</ref>
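
As a sketch of the simplest of these, a naive Monte Carlo estimator can be applied to the unnormalized Gaussian from the examples above (the sampling interval and sample size are arbitrary illustrative choices; bridge sampling and the other cited methods are more elaborate):

<syntaxhighlight lang="python">
import math
import random

random.seed(0)

# Unnormalized target g(x) = exp(-x^2 / 2); its true normalizing
# constant is sqrt(2*pi) ≈ 2.5066.
def g(x):
    return math.exp(-x * x / 2)

# Naive Monte Carlo: Z = ∫ g(x) dx ≈ width * (average of g at uniform
# draws on [-10, 10]); the mass outside this interval is negligible.
N = 200_000
width = 20.0
Z_hat = width * sum(g(random.uniform(-10.0, 10.0)) for _ in range(N)) / N

print(Z_hat)                   # close to sqrt(2*pi)
print(math.sqrt(2 * math.pi))
</syntaxhighlight>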
 
==Non-probabilistic uses==
The [[Legendre polynomials]] are characterized by [[orthogonality]] with respect to the uniform measure on the interval [−1, 1] and the fact that they are '''normalized''' so that their value at 1 is 1. The constant by which one multiplies a polynomial so its value at 1 is 1 is a normalizing constant.

==See also==
*[[Normalization (statistics)]]
 
==References==
{{reflist}}
{{refbegin}}
*{{cite book |last=Feller |first=William |author-link=William Feller |title=An Introduction to Probability Theory and its Applications (volume I) |publisher=John Wiley & Sons |date=1968 |isbn=0-471-25708-7}}
{{refend}}
 
[[Category:Theory of probability distributions]]