Slice sampling gets its name from the first step: defining a ''slice'' by sampling from an auxiliary variable <math>Y</math>. This variable is sampled uniformly from <math>[0, f(x)]</math>, where <math>f(x)</math> is either the [[probability density function]] (PDF) of ''X'' or is at least proportional to its PDF. This defines a slice of ''X'' where <math>f(x) \ge Y</math>. In other words, we are now looking at a region of ''X'' where the probability density is at least <math>Y</math>. Then the next value of ''X'' is sampled uniformly from this slice. A new value of <math>Y</math> is sampled, then ''X'', and so on. This can be visualized as alternately sampling the y-position and then the x-position of points under the PDF, so that the ''X'' values are drawn from the desired distribution. The <math>Y</math> values have no particular consequences or interpretations outside of their usefulness for the procedure.
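For illustration, the following is a minimal Python sketch of this alternating procedure for the exponential density <math>f(x) = e^{-x}</math>, chosen because the slice <math>\{x : f(x) \ge y\}</math> is simply the interval <math>[0, -\ln y]</math>; the function name and starting point are illustrative, not part of the method.

<syntaxhighlight lang="python">
import math
import random

def slice_sample_exponential(n_samples, x0=1.0):
    """Sample from f(x) = exp(-x), x >= 0, by alternately drawing the
    auxiliary variable y and then x uniformly from the slice
    {x : f(x) >= y} = [0, -ln(y)]."""
    samples = []
    x = x0
    for _ in range(n_samples):
        y = random.uniform(0.0, math.exp(-x))  # vertical position under the curve
        x = random.uniform(0.0, -math.log(y))  # uniform draw from the slice
        samples.append(x)
    return samples

# The sample mean should be close to 1, the mean of the Exp(1) distribution.
draws = slice_sample_exponential(10000)
print(sum(draws) / len(draws))
</syntaxhighlight>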
If both the PDF and its inverse are available, and the distribution is unimodal, then finding the slice and sampling from it are simple. If not, a stepping-out procedure can be used to find an interval whose endpoints fall outside the slice, and a sample can then be drawn from the slice by [[rejection sampling]] within that interval. Various procedures for this are described in detail by [[Radford M. Neal]].<ref name="radford03"/>
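A minimal sketch of one such update, combining stepping out with rejection-and-shrinkage inside the bracketing interval, is given below; the tuning width <code>w</code> and the standard-normal target used in the usage example are illustrative assumptions rather than part of the procedure itself.

<syntaxhighlight lang="python">
import math
import random

def slice_sample_step(x0, log_f, w=1.0):
    """One slice-sampling update using stepping out and shrinkage.
    `log_f` is the log of a function proportional to the target density;
    `w` is an initial interval width (a tuning parameter)."""
    # Define the slice by drawing y uniformly from (0, f(x0)],
    # working in log space for numerical stability.
    log_y = log_f(x0) + math.log(random.random())

    # Stepping out: expand an interval around x0 until both endpoints
    # lie outside the slice, i.e. f(endpoint) < y.
    left = x0 - w * random.random()
    right = left + w
    while log_f(left) > log_y:
        left -= w
    while log_f(right) > log_y:
        right += w

    # Shrinkage: propose uniformly from [left, right]; rejected points
    # are used to shrink the interval toward x0.
    while True:
        x1 = random.uniform(left, right)
        if log_f(x1) > log_y:
            return x1
        if x1 < x0:
            left = x1
        else:
            right = x1

# Usage: draw from a standard normal via its unnormalized log-density.
x, draws = 0.0, []
for _ in range(5000):
    x = slice_sample_step(x, lambda t: -0.5 * t * t)
    draws.append(x)
</syntaxhighlight>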
In contrast to many available methods for generating random numbers from non-uniform distributions, random variates generated directly by this approach exhibit serial statistical dependence. This is because, to draw the next sample, the slice is defined using the value of ''f''(''x'') at the current sample.