Metropolis–Hastings algorithm
 
Compared with an algorithm like [[adaptive rejection sampling]]<ref name=":0">{{Cite journal |last=Gilks |first=W. R. |last2=Wild |first2=P. |date=1992-01-01 |title=Adaptive Rejection Sampling for Gibbs Sampling |journal=Journal of the Royal Statistical Society. Series C (Applied Statistics) |volume=41 |issue=2 |pages=337–348 |doi=10.2307/2347565 |jstor=2347565}}</ref> that directly generates independent samples from a distribution, Metropolis–Hastings and other MCMC algorithms have a number of disadvantages:
* The samples are [[autocorrelation|autocorrelated]]. Even though over the long run they correctly follow <math>P(x)</math>, a set of nearby samples will be correlated with each other and will not correctly reflect the distribution. This means that the effective sample size can be significantly lower than the number of samples actually taken, leading to large errors (see the sketch following this list).
* Although the Markov chain eventually converges to the desired distribution, the initial samples may follow a very different distribution, especially if the starting point is in a region of low density. As a result, a ''burn-in'' period is typically necessary,<ref>{{Cite book |last=Gelman |first=Andrew |title=Bayesian Data Analysis |date=2004 |publisher=Chapman & Hall / CRC |isbn=978-1584883883 |edition=2nd |___location=Boca Raton, Fla. |oclc=51991499}}</ref> during which an initial number of samples is discarded.
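
The following is a minimal Python sketch of both points (not part of the algorithm's formal presentation; the random-walk proposal, the proposal scale, the burn-in length, and the initial-positive-sequence estimator of the effective sample size are all illustrative assumptions). It runs a random-walk Metropolis sampler on a standard normal target, discards a burn-in period, and estimates the effective sample size from the chain's autocorrelations:

<syntaxhighlight lang="python">
import numpy as np

def metropolis_hastings(log_p, x0, n_samples, proposal_scale=1.0, rng=None):
    """Random-walk Metropolis sampler for a 1-D target with log-density log_p."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0
    samples = np.empty(n_samples)
    for i in range(n_samples):
        x_prop = x + proposal_scale * rng.normal()             # symmetric proposal
        if np.log(rng.uniform()) < log_p(x_prop) - log_p(x):   # accept/reject step
            x = x_prop
        samples[i] = x                                          # repeat x if rejected
    return samples

def effective_sample_size(chain):
    """Crude ESS estimate: N / (1 + 2 * sum of initial positive autocorrelations)."""
    x = chain - chain.mean()
    n = len(x)
    acf = np.correlate(x, x, mode="full")[n - 1:] / (np.arange(n, 0, -1) * x.var())
    tau = 1.0
    for rho in acf[1:]:
        if rho < 0:           # stop at the first negative autocorrelation
            break
        tau += 2.0 * rho
    return n / tau

# Target: standard normal, log-density up to an additive constant
log_p = lambda x: -0.5 * x**2

chain = metropolis_hastings(log_p, x0=10.0, n_samples=5000, proposal_scale=0.5)
post_burn_in = chain[1000:]   # discard burn-in caused by the poor starting point
print("post-burn-in samples:", len(post_burn_in),
      "estimated effective sample size:", round(effective_sample_size(post_burn_in)))
</syntaxhighlight>

Starting the chain at <math>x_0 = 10</math>, far from the mode of the standard normal target, makes the need for burn-in visible, and the reported effective sample size is typically only a small fraction of the number of post-burn-in draws because successive samples are strongly autocorrelated.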