Note that, for a general problem, it is not clear which distribution <math>g(x' \mid x)</math> one should use or how many iterations are necessary for proper estimation; both are free parameters of the method, which must be adjusted to the particular problem at hand.
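The effect of this free parameter can be checked empirically. The following is a minimal sketch, not part of the article: the Gaussian random-walk proposal, the standard-normal target <math>P(x) \propto e^{-x^2/2}</math>, and the function name <code>acceptance_rate</code> are all illustrative assumptions.

```python
import math
import random

random.seed(1)

def acceptance_rate(step, n=20000):
    """Fraction of accepted moves for a Gaussian random-walk proposal
    of width `step`, targeting a standard normal P(x) ~ exp(-x**2 / 2)."""
    x, accepted = 0.0, 0
    for _ in range(n):
        x_new = random.gauss(x, step)  # draw proposal x' ~ g(. | x)
        # Symmetric proposal: accept with probability min(1, P(x')/P(x)),
        # computed in log space for numerical stability.
        if math.log(random.random()) < 0.5 * (x * x - x_new * x_new):
            x = x_new
            accepted += 1
    return accepted / n

# Very small steps are almost always accepted but explore the target
# slowly; very large steps are rejected often. Both extremes mix poorly,
# which is why the proposal width must be tuned to the problem.
rates = {s: acceptance_rate(s) for s in (0.1, 1.0, 10.0)}
```

The acceptance rate alone does not measure efficiency, but monitoring it is a common heuristic for tuning the proposal width.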
==Use in numerical integration==
{{main|Monte Carlo integration}}
==Step-by-step instructions==
Suppose that the most recent value sampled is <math>x_t</math>. To follow the Metropolis–Hastings algorithm, we next draw a new proposal state <math>x'</math> with probability density <math>g(x' \mid x_t)</math> and calculate a value
:<math>
a = \frac{P(x')}{P(x_t)} \, \frac{g(x_t \mid x')}{g(x' \mid x_t)}.
</math>
The Markov chain is started from an arbitrary initial value <math>x_0</math>, and the algorithm is run for many iterations until this initial state is "forgotten". The samples discarded during this initial period are known as the ''burn-in''. The remaining accepted values of <math>x</math> represent a [[Sample (statistics)|sample]] from the distribution <math>P(x)</math>.
The algorithm works best if the proposal density matches the shape of the target distribution <math>P(x)</math>, from which direct sampling is difficult; that is, <math>g(x' \mid x_t) \approx P(x')</math>.
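The steps above can be sketched in a few lines. This is an illustrative implementation, not taken from the article: the Gaussian random-walk proposal, the standard-normal target, and the names <code>metropolis_hastings</code>, <code>log_p</code>, <code>burn_in</code>, and <code>step</code> are all assumptions made for the example.

```python
import math
import random

random.seed(0)

def metropolis_hastings(log_p, x0, n_samples, burn_in=1000, step=1.0):
    """Sample from an unnormalized density exp(log_p(x)) using a
    symmetric Gaussian random-walk proposal g(x' | x) = N(x, step**2)."""
    x = x0
    samples = []
    for i in range(burn_in + n_samples):
        x_new = random.gauss(x, step)  # draw proposal x' ~ g(. | x)
        # With a symmetric proposal the ratio g(x | x') / g(x' | x)
        # cancels, so a = P(x') / P(x); compare in log space.
        log_a = log_p(x_new) - log_p(x)
        if math.log(random.random()) < log_a:  # accept with prob. min(1, a)
            x = x_new
        if i >= burn_in:  # discard the burn-in period
            samples.append(x)
    return samples

# Example: sample from a standard normal, P(x) proportional to exp(-x**2 / 2)
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_samples=5000)
```

Note that only the ratio <math>P(x')/P(x_t)</math> enters, so the target density never needs to be normalized, which is the main practical appeal of the method.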
* [[Genetic algorithm]]s
* [[Gibbs sampling]]
* [[Mean
* [[Metropolis-adjusted Langevin algorithm]]
* [[Metropolis light transport]]