Metropolis–Hastings algorithm

[[File:Flowchart-of-Metropolis-Hastings-M-H-algorithm-for-the-parameter-estimation-using-the.png|thumb|300px|The Metropolis–Hastings algorithm sampling a [[Normal distribution|normal]] one-dimensional [[Posterior probability|posterior]] probability distribution.]]
 
In [[statistics]] and [[statistical physics]], the '''Metropolis–Hastings algorithm''' is a [[Markov chain Monte Carlo]] (MCMC) method for obtaining a sequence of [[pseudo-random number sampling|random samples]] from a [[probability distribution]] from which direct sampling is difficult. New samples are added to the sequence in two steps: first a new sample is proposed based on the previous sample, then the proposed sample is either added to the sequence or rejected depending on the value of the probability distribution at that point. The resulting sequence can be used to approximate the distribution (e.g. to generate a [[histogram]]) or to [[Monte Carlo integration|compute an integral]] (e.g. an [[expected value]]).

Metropolis–Hastings and other MCMC algorithms are generally used for sampling from multi-dimensional distributions, especially when the number of dimensions is high. For single-dimensional distributions, there are usually other methods (e.g. [[adaptive rejection sampling]]) that can directly return independent samples from the distribution, and these are free from the problem of [[autocorrelation|autocorrelated]] samples that is inherent in MCMC methods.
 
==History==
The algorithm is named in part for [[Nicholas Metropolis]], the first coauthor of a 1953 paper, entitled ''[[Equation of State Calculations by Fast Computing Machines]]'', with [[Arianna W. Rosenbluth]], [[Marshall Rosenbluth]], [[Augusta H. Teller]] and [[Edward Teller]]. For many years the algorithm was known simply as the ''Metropolis algorithm''.<ref>{{Cite book |last1=Kalos |first1=Malvin H. |title=Monte Carlo Methods Volume I: Basics |last2=Whitlock |first2=Paula A. |publisher=Wiley |year=1986 |___location=New York |pages=78–88}}</ref><ref>{{Cite journal |last=Tierney |first=Luke |date=1994 |title=Markov chains for exploring posterior distributions |url=https://projecteuclid.org/journals/annals-of-statistics/volume-22/issue-4/Markov-Chains-for-Exploring-Posterior-Distributions/10.1214/aos/1176325750.full |journal=The Annals of Statistics |volume=22 |issue=4 |pages=1701–1762 |doi=10.1214/aos/1176325750}}</ref> The paper proposed the algorithm for the case of symmetrical proposal distributions, but in 1970, [[W.K. Hastings]] extended it to the more general case.<ref name=Hastings/> The generalized method was eventually identified by both names, although the first use of the term "Metropolis–Hastings algorithm" is unclear.
 
Some controversy exists with regard to credit for development of the Metropolis algorithm. Metropolis, who was familiar with the computational aspects of the method, had coined the term "Monte Carlo" in an earlier article with [[Stanisław Ulam]], and led the group in the Theoretical Division that designed and built the [[MANIAC I]] computer used in the experiments in 1952. However, prior to 2003 there was no detailed account of the algorithm's development. Shortly before his death, [[Marshall Rosenbluth]] attended a 2003 conference at LANL marking the 50th anniversary of the 1953 publication. At this conference, Rosenbluth described the algorithm and its development in a presentation titled "Genesis of the Monte Carlo Algorithm for Statistical Mechanics".<ref name=Rosenbluth/> Further historical clarification is made by Gubernatis in a 2005 journal article<ref name=Gubernatis/> recounting the 50th anniversary conference. Rosenbluth makes it clear that he and his wife Arianna did the work, and that Metropolis played no role in the development other than providing computer time.
 
This contradicts an account by Edward Teller, who states in his memoirs that the five authors of the 1953 article worked together for "days (and nights)".<ref name=Teller/> In contrast, the detailed account by Rosenbluth credits Teller with a crucial but early suggestion to "take advantage of [[statistical mechanics]] and take ensemble averages instead of following detailed [[kinematics]]". This, says Rosenbluth, started him thinking about the generalized Monte Carlo approach – a topic which he says he had discussed often with [[John von Neumann|John Von Neumann]]. Arianna Rosenbluth recounted (to Gubernatis in 2003) that Augusta Teller started the computer work, but that Arianna herself took it over and wrote the code from scratch. In an oral history recorded shortly before his death,<ref name=Barth/> Rosenbluth again credits Teller with posing the original problem, himself with solving it, and Arianna with programming the computer.
 
==Description==
The Metropolis–Hastings algorithm can draw samples from any [[probability distribution]] with [[probability density]] <math>P(x)</math>, provided that we know a function <math>f(x)</math> proportional to the [[Probability density function|density]] <math>P</math> and the values of <math>f(x)</math> can be calculated. The requirement that <math>f(x)</math> must only be proportional to the density, rather than exactly equal to it, makes the Metropolis–Hastings algorithm particularly useful, because it removes the need to calculate the density's normalization factor, which is often extremely difficult in practice.
 
The Metropolis–Hastings algorithm generates a sequence of sample values in such a way that, as more and more sample values are produced, the distribution of values more closely approximates the desired distribution. These sample values are produced iteratively in such a way that the distribution of the next sample depends only on the current sample value, which makes the sequence of samples into a [[Markov chain]]. Specifically, at each iteration, the algorithm proposes a candidate for the next sample value based on the current sample value. Then, with some probability, the candidate is either accepted, in which case the candidate value is used in the next iteration, or rejected, in which case the candidate value is discarded and the current value is reused in the next iteration. The probability of acceptance is determined by comparing the values of the function <math>f(x)</math> at the current and candidate sample values with respect to the desired distribution.
 
The method used to propose new candidates is characterized by the probability distribution <math>g(x\mid y)</math> (sometimes written <math>Q(x\mid y)</math>) of a new proposed sample <math>x</math> given the previous sample <math>y</math>. This is called the ''proposal density'', ''proposal function'', or ''jumping distribution''. A common choice for <math>g(x\mid y)</math> is a [[Gaussian distribution]] centered at <math>y</math>, so that points closer to <math>y</math> are more likely to be visited next, making the sequence of samples into a [[Gaussian random walk]]. In the original paper by Metropolis et al. (1953), <math>g(x\mid y)</math> was suggested to be a uniform distribution limited to some maximum distance from <math>y</math>. More complicated proposal functions are also possible, such as those of [[Hamiltonian Monte Carlo]], [[Langevin Monte Carlo]], or [[preconditioned Crank–Nicolson]].
 
For the purpose of illustration, the Metropolis algorithm, a special case of the Metropolis–Hastings algorithm where the proposal function is symmetric, is described below.
;Metropolis algorithm (symmetric proposal distribution):
Let <math>f(x)</math> be a function that is proportional to the desired probability density function <math>P(x)</math> (a.k.a. a target distribution){{efn|In the original paper by Metropolis et al. (1953), <math>f</math> was taken to be the [[Boltzmann distribution]] as the specific application considered was [[Monte Carlo integration]] of [[equation of state|equations of state]] in [[physical chemistry]]; the extension by Hastings generalized to an arbitrary distribution <math>f</math>.}}.
# Initialization: Choose an arbitrary point <math>x_t</math> to be the first observation in the sample and choose a proposal function <math>g(x\mid y)</math> (sometimes written <math>Q(x\mid y)</math>) that suggests a candidate for the next sample value <math>x</math>, given the previous sample value <math>y</math>. In this section, <math>g</math> is assumed to be symmetric; in other words, it must satisfy <math>g(x\mid y) = g(y\mid x)</math>. A usual choice is to let <math>g(x\mid y)</math> be a [[Gaussian distribution]] centered at <math>y</math>, so that points closer to <math>y</math> are more likely to be visited next, making the sequence of samples into a [[random walk]]{{efn|In the original paper by Metropolis et al. (1953), <math>g(x\mid y)</math> was suggested to be a random translation with uniform density over some prescribed range.}}. The function <math>g</math> is referred to as the ''proposal density'' or ''jumping distribution''.
# For each iteration ''t'':
#* ''Propose'' a candidate <math>x'</math> for the next sample by picking from the distribution <math>g(x'\mid x_t)</math>.
#* ''Calculate'' the ''acceptance ratio'' <math>\alpha = f(x')/f(x_t)</math>, which will be used to decide whether to accept or reject the candidate{{efn|In the original paper by Metropolis et al. (1953), <math>f</math> was actually the [[Boltzmann distribution]], as it was applied to physical systems in the context of [[statistical mechanics]] (e.g., a maximal-entropy distribution of microstates for a given temperature at thermal equilibrium). Consequently, the acceptance ratio was itself an exponential of the difference in the parameters of the numerator and denominator of this ratio.}}. Because ''f'' is proportional to the density of ''P'', we have that <math>\alpha = f(x')/f(x_t) = P(x')/P(x_t)</math>.
#* ''Accept or reject'':
#** Generate a uniform random number <math>u \in [0, 1]</math>.
#** If <math>u \le \alpha</math>, then ''accept'' the candidate by setting <math>x_{t+1} = x'</math>.
#** If <math>u > \alpha</math>, then ''reject'' the candidate and set <math>x_{t+1} = x_t</math> instead.
 
This algorithm proceeds by randomly attempting to move about the sample space, sometimes accepting the moves and sometimes remaining in place. The number of iterations the algorithm spends at a given point <math>x</math> is proportional to the density <math>P(x)</math> at that point. Note that the acceptance ratio <math>\alpha</math> indicates how probable the new proposed sample is with respect to the current sample, according to the distribution whose density is <math>P(x)</math>. If we attempt to move to a point that is more probable than the existing point (i.e. a point in a higher-density region of <math>P(x)</math>, corresponding to <math>\alpha > 1 \ge u</math>), we will always accept the move. However, if we attempt to move to a less probable point, we will sometimes reject the move, and the larger the relative drop in probability, the more likely we are to reject the new point. Thus, we will tend to stay in (and return large numbers of samples from) high-density regions of <math>P(x)</math>, while only occasionally visiting low-density regions. Intuitively, this is why this algorithm works and returns samples that follow the desired distribution with density <math>P(x)</math>.
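
For concreteness, the following is a minimal sketch of the Metropolis algorithm just described, using a symmetric Gaussian proposal; the target function <code>f</code>, the proposal scale <code>sigma</code>, and the starting point are illustrative assumptions rather than prescribed choices.

<syntaxhighlight lang="python">
import math
import random

def metropolis(f, x0, sigma, n_samples):
    """Sample from a density proportional to f using the Metropolis
    algorithm with a symmetric Gaussian proposal of scale sigma."""
    x = x0
    samples = []
    for _ in range(n_samples):
        x_new = random.gauss(x, sigma)   # propose x' from g(x' | x_t)
        alpha = f(x_new) / f(x)          # acceptance ratio f(x') / f(x_t)
        if random.random() <= alpha:     # accept with probability min(1, alpha)
            x = x_new
        samples.append(x)                # on rejection, x_t is reused
    return samples

# Example: sample from an unnormalized standard normal density.
f = lambda x: math.exp(-0.5 * x * x)
chain = metropolis(f, x0=0.0, sigma=1.0, n_samples=10000)
</syntaxhighlight>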
 
Compared with an algorithm like [[adaptive rejection sampling]]<ref name=":0">{{Cite journal |last1=Gilks |first1=W. R. |last2=Wild |first2=P. |date=1992-01-01 |title=Adaptive Rejection Sampling for Gibbs Sampling |journal=Journal of the Royal Statistical Society. Series C (Applied Statistics) |volume=41 |issue=2 |pages=337–348 |doi=10.2307/2347565 |jstor=2347565}}</ref> that directly generates independent samples from a distribution, Metropolis–Hastings and other MCMC algorithms have a number of disadvantages:
* The samples are [[autocorrelation|autocorrelated]]. Even though over the long term they do correctly follow <math>P(x)</math>, a set of nearby samples will be correlated with each other and not correctly reflect the distribution. This means that effective sample sizes can be significantly lower than the number of samples actually taken, leading to large errors; a sketch estimating this effect follows this list.
* Although the Markov chain eventually converges to the desired distribution, the initial samples may follow a very different distribution, especially if the starting point is in a region of low density. As a result, a ''burn-in'' period is typically necessary,<ref>{{Cite book |title=Bayesian data analysis |date=2004 |publisher=Chapman & Hall / CRC |others=Gelman, Andrew |isbn=978-1584883883 |edition=2nd |___location=Boca Raton, Fla. |oclc=51991499}}</ref> where an initial number of samples are thrown away.
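
As a rough illustration of the autocorrelation problem mentioned in the first point above, the following sketch computes the empirical lag-<math>k</math> autocorrelation of a chain and derives a crude effective sample size from it; the truncation rule used here is an illustrative simplification, not a method from the cited literature.

<syntaxhighlight lang="python">
def autocorrelation(samples, k):
    """Empirical lag-k autocorrelation of a sequence of samples."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    cov = sum((samples[i] - mean) * (samples[i + k] - mean)
              for i in range(n - k)) / (n - k)
    return cov / var

def effective_sample_size(samples, max_lag=1000):
    """Crude effective sample size n / (1 + 2 * sum of autocorrelations),
    truncating the sum at the first non-positive autocorrelation."""
    tau = 1.0
    for k in range(1, max_lag):
        rho = autocorrelation(samples, k)
        if rho <= 0:      # stop once the estimated correlation dies out
            break
        tau += 2.0 * rho
    return len(samples) / tau
</syntaxhighlight>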
 
On the other hand, most simple [[rejection sampling]] methods suffer from the "[[curse of dimensionality]]", where the probability of rejection increases exponentially as a function of the number of dimensions. Metropolis–Hastings, along with other MCMC methods, does not suffer from this problem to the same degree, and is thus often the only solution available when the number of dimensions of the distribution to be sampled is high. As a result, MCMC methods are often the methods of choice for producing samples from [[hierarchical Bayesian model]]s and other high-dimensional statistical models used in many disciplines.
 
In [[multivariate distribution|multivariate]] distributions, the classic Metropolis–Hastings algorithm as described above involves choosing a new multi-dimensional sample point. When the number of dimensions is high, finding a suitable jumping distribution can be difficult, as the different individual dimensions behave in very different ways, and the jumping width (see above) must be "just right" for all dimensions at once to avoid excessively slow mixing. An alternative approach that often works better in such situations, known as [[Gibbs sampling]], involves choosing a new sample for each dimension separately from the others, rather than choosing a sample for all dimensions at once. This reduces the problem of sampling from a potentially high-dimensional space to a collection of low-dimensional sampling problems.<ref>{{Cite journal |last=Lee |first=Se Yoon |year=2021 |title=Gibbs sampler and coordinate ascent variational inference: A set-theoretical review |journal=Communications in Statistics - Theory and Methods |volume=51 |issue=6 |pages=1549–1568 |arxiv=2008.01006 |doi=10.1080/03610926.2021.1921214 |s2cid=220935477}}</ref> This is especially applicable when the multivariate distribution is composed of a set of individual [[random variable]]s in which each variable is conditioned on only a small number of other variables, as is the case in most typical [[hierarchical Bayesian model|hierarchical models]]. The individual variables are then sampled one at a time, with each variable conditioned on the most recent values of all the others. Various algorithms can be used to choose these individual samples, depending on the exact form of the multivariate distribution: some possibilities are the [[adaptive rejection sampling]] methods,<ref name=":0" /> the adaptive rejection Metropolis sampling algorithm,<ref>{{Cite journal |last1=Gilks |first1=W. R. |last2=Best |first2=N. G. |author-link2=Nicky Best |last3=Tan |first3=K. K. C. |date=1995-01-01 |title=Adaptive Rejection Metropolis Sampling within Gibbs Sampling |journal=Journal of the Royal Statistical Society. Series C (Applied Statistics) |volume=44 |issue=4 |pages=455–472 |doi=10.2307/2986138 |jstor=2986138}}</ref> a simple one-dimensional Metropolis–Hastings step, or [[slice sampling]].
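
As a concrete illustration of the simple one-dimensional Metropolis–Hastings step mentioned above, the following sketch updates each coordinate in turn, conditioned on the current values of all the others; the Gaussian proposal and its scale are illustrative assumptions.

<syntaxhighlight lang="python">
import random

def componentwise_metropolis_sweep(f, x, sigma):
    """One sweep of single-component Metropolis updates: each coordinate
    of x is proposed and accepted or rejected in turn, holding the other
    coordinates fixed. f is the unnormalized joint density."""
    x = list(x)
    for i in range(len(x)):
        proposal = list(x)
        proposal[i] = random.gauss(x[i], sigma)   # perturb coordinate i only
        alpha = f(proposal) / f(x)                # ratio of joint densities
        if random.random() <= alpha:
            x = proposal                          # accept the one-coordinate move
    return x
</syntaxhighlight>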
 
==Formal derivation==
## ''Increment'': set <math>t = t + 1</math>.
 
Provided that specified conditions are met, the empirical distribution of saved states <math>x_0, \ldots, x_T</math> will approach <math>P(x)</math>. The number of iterations (<math>T</math>) required to effectively estimate <math>P(x)</math> depends on a number of factors, including the relationship between <math>P(x)</math> and the proposal distribution and the desired accuracy of estimation.<ref>Raftery, Adrian E., and Steven Lewis. "How Many Iterations in the Gibbs Sampler?" ''In Bayesian Statistics 4''. 1992.</ref> For distributions on discrete state spaces, it has to be of the order of the [[autocorrelation]] time of the Markov process.<ref name=Newman_Barkema/>
 
It is important to notice that, in a general problem, it is not clear which distribution <math>g(x' \mid x)</math> one should use or how many iterations are necessary for proper estimation; both are free parameters of the method, which must be adjusted to the particular problem at hand.
 
The Markov chain is started from an arbitrary initial value <math>x_0</math>, and the algorithm is run for many iterations until this initial state is "forgotten". These initial samples are discarded, a practice known as ''burn-in''. The remaining set of accepted values of <math>x</math> represents a [[Sample (statistics)|sample]] from the distribution <math>P(x)</math>.
 
The algorithm works best if the proposal density matches the shape of the target distribution <math>P(x)</math>, from which direct sampling is difficult; that is, <math>g(x' \mid x_t) \approx P(x')</math>.
If a Gaussian proposal density <math>g</math> is used, the variance parameter <math>\sigma^2</math> has to be tuned during the burn-in period.
This is usually done by calculating the ''acceptance rate'', which is the fraction of proposed samples that is accepted in a window of the last <math>N</math> samples.
The desired acceptance rate depends on the target distribution; however, it has been shown theoretically that the ideal acceptance rate for a one-dimensional Gaussian distribution is about 50%, decreasing to about 23% for an <math>N</math>-dimensional Gaussian target distribution.<ref name=Roberts/> These guidelines can work well when sampling from sufficiently regular Bayesian posteriors, as they often follow a multivariate normal distribution, as can be established using the [[Bernstein–von Mises theorem]].<ref>{{Cite journal |last1=Schmon |first1=Sebastian M. |last2=Gagnon |first2=Philippe |date=2022-04-15 |title=Optimal scaling of random walk Metropolis algorithms using Bayesian large-sample asymptotics |journal=Statistics and Computing |language=en |volume=32 |issue=2 |pages=28 |doi=10.1007/s11222-022-10080-8 |issn=0960-3174 |pmc=8924149 |pmid=35310543}}</ref>
 
If <math>\sigma^2</math> is too small, the chain will ''mix slowly'' (i.e., the acceptance rate will be high, but successive samples will move around the space slowly, and the chain will converge only slowly to <math>P(x)</math>). On the other hand, if <math>\sigma^2</math> is too large, the acceptance rate will be very low because the proposals are likely to land in regions of much lower probability density, so <math>a_1</math> will be very small, and again the chain will converge very slowly. One typically tunes the proposal distribution so that the algorithm accepts on the order of 30% of all samples – in line with the theoretical estimates mentioned in the previous paragraph.
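
A common heuristic for carrying out this tuning during burn-in is sketched below; the window size, target rate, and adjustment factors are illustrative assumptions rather than values prescribed by the literature cited above.

<syntaxhighlight lang="python">
def tune_sigma(sigma, n_accepted, window=100, target=0.3):
    """Adjust the proposal scale after each window of proposals: widen it
    if the acceptance rate is above the target (steps too timid), narrow
    it if the rate is below the target (steps too bold)."""
    rate = n_accepted / window
    return sigma * 1.1 if rate > target else sigma * 0.9
</syntaxhighlight>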
 
 
==See also==
* [[Detailed balance]]
* [[Genetic algorithm]]s
* [[Gibbs sampling]]
* [[Hamiltonian Monte Carlo]]
* [[Mean-field particle methods]]
* [[Metropolis-adjusted Langevin algorithm]]
* [[Metropolis light transport]]
* [[Multiple-try Metropolis]]
* [[Parallel tempering]]
* [[Preconditioned Crank–Nicolson algorithm]]
* [[Particle filter|Sequential Monte Carlo]]
* [[Simulated annealing]]
<ref name="Teller">Teller, Edward. ''Memoirs: A Twentieth-Century Journey in Science and Politics''. [[Perseus Publishing]], 2001, p. 328</ref>
<ref name="Barth">Rosenbluth, Marshall. [https://www.aip.org/history-programs/niels-bohr-library/oral-histories/28636-1 "Oral History Transcript"]. American Institute of Physics</ref>
<ref name="Gubernatis">{{Cite journal |last=J.E. Gubernatis |year=2005 |title=Marshall Rosenbluth and the Metropolis Algorithm |url=https://zenodo.org/record/1231899 |journal=[[Physics of Plasmas]] |volume=12 |issue=5 |pagesarticle-number=057303 |bibcode=2005PhPl...12e7303G |doi=10.1063/1.1887186}}</ref>
<ref name="Rosenbluth">{{Cite journal |last=M.N. Rosenbluth |year=2003 |title=Genesis of the Monte Carlo Algorithm for Statistical Mechanics |journal=[[AIP Conference Proceedings]] |volume=690 |pages=22–30 |bibcode=2003AIPC..690...22R |doi=10.1063/1.1632112}}</ref>
<!--<ref name="Dyson">{{Cite journal |last=F. Dyson |year=2006 |title=Marshall N. Rosenbluth |journal=[[Proceedings of the American Philosophical Society]] |volume=250 |pages=404}}</ref>-->
<ref name="Roberts">{{Cite journal |lastlast1=Roberts |firstfirst1=G.O. |last2=Gelman |first2=A. |last3=Gilks |first3=W.R. |year=1997 |title=Weak convergence and optimal scaling of random walk Metropolis algorithms |url=http://www.stat.columbia.edu/~gelman/research/published/theory7.ps |journal=[[Ann. Appl. Probab.]] |volume=7 |issue=1 |pages=110&ndash;120 |citeseerx=10.1.1.717.2582 |doi=10.1214/aoap/1034625254}}</ref>
<ref name="Roberts_Casella">{{Cite book |lastlast1=Robert |firstfirst1=Christian |url=https://archive.org/details/springer_10.1007-978-1-4757-4145-2 |title=Monte Carlo Statistical Methods |last2=Casella |first2=George |publisher=Springer |year=2004 |isbn=978-0387212395}}</ref>
<ref name="Newman_Barkema">{{Cite book |lastlast1=Newman |firstfirst1=M. E. J. |title=Monte Carlo Methods in Statistical Physics |last2=Barkema |first2=G. T. |publisher=Oxford University Press |year=1999 |isbn=978-0198517979 |___location=USA}}</ref>
}}
==Notes==
* [[Bernd A. Berg]]. ''Markov Chain Monte Carlo Simulations and Their Statistical Analysis''. Singapore, [[World Scientific]], 2004.
* Chib, Siddhartha; Greenberg, Edward (1995). [https://www.jstor.org/stable/2684568 "Understanding the Metropolis&ndash;Hastings Algorithm"]. ''[[The American Statistician]]'', 49(4), 327&ndash;335.
* [http://www.tandfonline.com/doi/abs/10.1080/03610918.2013.777455#.VOk8J1PF9_c David D. L. Minh and Do Le Minh. "Understanding the Hastings Algorithm." Communications in Statistics - Simulation and Computation, 44:2, 332–349, 2015]
* Bolstad, William M. (2010) ''Understanding Computational Bayesian Statistics'', [[John Wiley & Sons]] {{ISBN|0-470-04609-0}}