Metropolis–Hastings algorithm: Difference between revisions

Let <math>f(x)</math> be a function that is proportional to the desired probability density function <math>P(x)</math> (a.k.a. a target distribution).
 
# Initialization: Choose an arbitrary point <math>x_t</math> to be the first observation in the sample and choose an arbitrary probability density <math>g(x\mid y)</math> (sometimes written <math>Q(x\mid y)</math>) that suggests a candidate for the next sample value <math>x</math>, given the previous sample value <math>y</math>. In this section, <math>g</math> is assumed to be symmetric; in other words, it must satisfy <math>g(x\mid y) = g(y\mid x)</math>. A usual choice is to let <math>g(x\mid y)</math> be a [[Gaussian distribution]] centered at <math>y</math>, so that points closer to <math>y</math> are more likely to be visited next, making the sequence of samples into a [[random walk]] {{efn|In the original paper however, <math>g(x\mid y)</math> was actually the [[Boltzmann distribution]], as it was applied to physical systems in the context of [[statistical mechanics]]}}. The function <math>g</math> is referred to as the ''proposal density'' or ''jumping distribution''.
# For each iteration ''t'':
#* ''Generate'' a candidate <math>x'</math> for the next sample by picking from the distribution <math>g(x'\mid x_t)</math>.
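The procedure above, together with the standard Metropolis acceptance step (accept the candidate with probability <math>\min(1, f(x')/f(x_t))</math>, which is valid here because <math>g</math> is symmetric), can be sketched as follows. This is a minimal illustrative implementation, not the article's own pseudocode; the target <math>f</math>, the Gaussian step size, and the function name are choices made for the example.

```python
import math
import random

def metropolis(f, x0, n_samples, step=1.0, seed=0):
    """Sample from a distribution proportional to f using the Metropolis
    algorithm with a symmetric Gaussian proposal g(x' | x) = N(x, step^2)."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        # Generate a candidate x' by picking from g(x' | x_t),
        # here a Gaussian centered at the current sample x_t.
        x_new = rng.gauss(x, step)
        # Accept with probability min(1, f(x')/f(x_t)); because g is
        # symmetric, the proposal densities cancel in the ratio.
        if rng.random() < min(1.0, f(x_new) / f(x)):
            x = x_new
        samples.append(x)
    return samples

# Example target: proportional to a standard normal density.
# Note f need not be normalized, only proportional to P(x).
f = lambda x: math.exp(-x * x / 2)
draws = metropolis(f, x0=0.0, n_samples=20000)
```

Because the Gaussian proposal is centered at the current point, successive samples form a random walk, as described above; only the ratio <math>f(x')/f(x_t)</math> is needed, which is why <math>f</math> may be known only up to a normalizing constant.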