A common, useful modification to the partition function is to introduce auxiliary functions. This allows, for example, the partition function to be used as a [[generating function]] for [[correlation function]]s. This is discussed in greater detail below.
==The parameter β==
The role or meaning of the parameter <math>\beta</math> can be understood in a variety of different ways. In classical thermodynamics, it is an [[inverse temperature]]. More generally, it is the variable that is [[Conjugate variables (thermodynamics)|conjugate]] to some (arbitrary) function <math>H</math> of the random variables <math>X</math>. The word ''conjugate'' here is used in the sense of conjugate [[generalized coordinates]] in [[Lagrangian mechanics]]; thus, properly speaking, <math>\beta</math> is a [[Lagrange multiplier]]. It is also sometimes called the [[generalized force]]. All of these concepts share the idea that one value is to be held fixed while others, interconnected in some complicated way, are allowed to vary. In the present case, the value held fixed is the [[expectation value]] of <math>H</math>, even though many different [[probability distribution]]s can give rise to exactly this same (fixed) value.
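For a concrete illustration of this constrained-variation picture, consider the standard maximum-entropy argument for a discrete set of configurations <math>x</math>: one maximizes the entropy <math>S = -\sum_x P(x)\log P(x)</math> subject to normalization and to a fixed expectation value of <math>H</math>. Introducing Lagrange multipliers <math>\lambda</math> and <math>\beta</math> for the two constraints, the stationarity condition
:<math>\frac{\partial}{\partial P(x)}\left[-\sum_{x'} P(x')\log P(x') - \lambda\left(\sum_{x'} P(x') - 1\right) - \beta\left(\sum_{x'} P(x')H(x') - \langle H\rangle\right)\right] = 0</math>
gives <math>-\log P(x) - 1 - \lambda - \beta H(x) = 0</math>, that is, <math>P(x) \propto \exp\left(-\beta H(x)\right)</math>; the multiplier <math>\beta</math> is precisely the quantity conjugate to the constraint that fixes <math>\langle H\rangle</math>.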
Although the value of <math>\beta</math> is commonly taken to be real, it need not be, in general; this is discussed in the section [[#Normalization|Normalization]] below. The values of <math>\beta</math> can be understood to be the coordinates of points in a space; this space is in fact a [[manifold]], as sketched below. The study of these spaces as manifolds constitutes the field of [[information geometry]].
==Symmetry==
The potential function itself commonly takes the form of a sum
:<math>H(x_1,x_2,\dots) = \sum_s V(s)</math>
where each term <math>V(s)</math> depends only on a subset <math>s</math> of the random variables.
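A familiar special case of this form (given here purely as an illustration, with couplings <math>K_{ij}</math> and fields <math>h_i</math> chosen as placeholder symbols) is an energy containing only single-variable and pairwise terms, as in Ising-type models:
:<math>H(x_1,\dots,x_n) = -\sum_{(i,j)} K_{ij}\, x_i x_j - \sum_i h_i x_i,</math>
where each term in the sum involves at most two of the random variables.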
==As a measure==
The value of the expression
:<math>\exp \left(-\beta H(x_1,x_2,\dots) \right)</math>
can be interpreted as a relative likelihood (an unnormalized probability weight) for the configuration <math>(x_1,x_2,\dots)</math>. Dividing by the partition function normalizes it, giving the [[Gibbs measure]]
:<math>P(x_1,x_2,\dots) = \frac{1}{Z(\beta)} \exp \left(-\beta H(x_1,x_2,\dots) \right).</math>
Given the definition of the probability measure above, the expectation value of any function ''f'' of the random variables ''X'' may now be written as expected: for discrete-valued ''X'', one writes
:<math>\begin{align}
\langle f\rangle
& = \sum_{x_i} f(x_1,x_2,\dots) P(x_1,x_2,\dots) \\
& = \frac{1}{Z(\beta)} \sum_{x_i} f(x_1,x_2,\dots) \exp \left(-\beta H(x_1,x_2,\dots) \right)
\end{align}</math>
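As a minimal numerical sketch of these formulas (the two-spin energy and the function ''f'' below are hypothetical choices made for illustration, not taken from the text), the sum over configurations can be carried out directly for a small discrete system:
<syntaxhighlight lang="python">
import itertools
import math

# Hypothetical toy system: two spins x1, x2 in {-1, +1} with energy H = -x1*x2.
def H(x):
    x1, x2 = x
    return -x1 * x2

beta = 1.0
states = list(itertools.product([-1, +1], repeat=2))

# Partition function: Z(beta) = sum over configurations of exp(-beta * H(x))
Z = sum(math.exp(-beta * H(x)) for x in states)

# Gibbs measure: P(x) = exp(-beta * H(x)) / Z(beta)
P = {x: math.exp(-beta * H(x)) / Z for x in states}

# Expectation value of an arbitrary function f of the random variables;
# here, as an illustration, f(x1, x2) = x1 * x2.
def f(x):
    return x[0] * x[1]

expectation_f = sum(f(x) * P[x] for x in states)
print(Z, expectation_f)
</syntaxhighlight>
For this toy energy, the expectation value <math>\langle x_1 x_2\rangle</math> evaluates to <math>\tanh\beta \approx 0.76</math> at <math>\beta = 1</math>, which the sketch reproduces.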
The Gibbs measure is the unique statistical distribution that maximizes the entropy for a fixed expectation value of the energy; this underlies its use in [[maximum entropy method]]s.
==Information geometry==
The points <math>\beta</math> can be understood to form a space, and specifically, a [[manifold]]. Thus, it is reasonable to ask about the structure of this manifold; this is the task of [[information geometry]].
A natural metric arises from the second derivatives of the logarithm of the partition function with respect to the Lagrange multipliers:
:<math>g_{ij}(\beta) = \frac{\partial^2}{\partial \beta^i\partial \beta^j} \log Z(\beta) =
\langle \left(H_i-\langle H_i\rangle\right)\left( H_j-\langle H_j\rangle\right)\rangle</math>
This matrix is positive semi-definite, and may be interpreted as a [[metric tensor]], specifically, a [[Riemannian metric]]. Equipping the space of Lagrange multipliers with a metric in this way turns it into a [[Riemannian manifold]].<ref>{{cite journal |first=Gavin E. |last=Crooks |title=Measuring thermodynamic length |journal=Physical Review Letters |volume=99 |pages=100602 |year=2007}}</ref>
That the above defines the Fisher information metric can be readily seen by explicitly substituting for the expectation value:
:<math>\begin{align} g_{ij}(\beta)
& = \langle \left(H_i-\langle H_i\rangle\right)\left( H_j-\langle H_j\rangle\right)\rangle \\
& = \sum_{x} P(x) \left(H_i-\langle H_i\rangle\right)\left( H_j-\langle H_j\rangle\right) \\
& = \sum_{x} P(x) \frac{\partial \log P(x)}{\partial \beta^i} \frac{\partial \log P(x)}{\partial \beta^j}
\end{align}</math>
where the last line follows because <math>\partial \log P(x)/\partial \beta^i = -\left(H_i - \langle H_i\rangle\right)</math>; this is the standard form of the [[Fisher information metric]].
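In the simplest case of a single parameter <math>\beta</math> conjugate to a single function <math>H</math>, the metric reduces to a single component,
:<math>g(\beta) = \frac{\partial^2 \log Z}{\partial \beta^2} = \langle H^2\rangle - \langle H\rangle^2,</math>
the variance of <math>H</math>; in the thermodynamic interpretation, with <math>\beta</math> an inverse temperature and <math>H</math> an energy, this is the familiar fluctuation formula relating the energy variance to the heat capacity.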
Curiously, the [[Fisher information metric]] can also be understood as the flat-space [[Euclidean metric]], after appropriate change of variables, as described in the main article on it. When the <math>\beta</math> are complex-valued, the resulting metric is the [[Fubini–Study metric]]. When written in terms of [[mixed state (physics)|mixed states]], instead of [[pure state]]s, it is known as the [[Bures metric]].
==Correlation functions==
By introducing artificial auxiliary functions <math>J_k</math> into the partition function, one can use it to obtain the expectation values of the random variables. Thus, for example, by writing
:<math>\begin{align} Z(\beta,J)
& = Z(\beta,J_1,J_2,\dots) \\
& = \sum_{x_i} \exp \left(-\beta H(x_1,x_2,\dots) +
\sum_n J_n x_n \right)
\end{align}
</math>
one then has
:<math>\bold{E}[x_k] = \langle x_k \rangle = \left.
\frac{\partial}{\partial J_k}
\log Z(\beta,J) \right|_{J=0}
</math>
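To see why this works, note that differentiating under the sum brings down a factor of <math>x_k</math>:
:<math>\frac{\partial}{\partial J_k} \log Z(\beta,J)\Big|_{J=0} = \frac{1}{Z(\beta)} \sum_{x_i} x_k \exp \left(-\beta H(x_1,x_2,\dots)\right) = \langle x_k \rangle.</math>
Similarly, second derivatives with respect to the auxiliary functions, evaluated at <math>J=0</math>, produce the connected two-point [[correlation function]]s, <math>\partial^2 \log Z/\partial J_j \partial J_k |_{J=0} = \langle x_j x_k\rangle - \langle x_j\rangle\langle x_k\rangle</math>; this is the sense in which the partition function serves as a [[generating function]].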
==See also==
* [[Partition function (statistical mechanics)]]
==References==
{{reflist}}