Slice sampling
{{Short description|Algorithm}}
'''Slice sampling''' is a type of [[Markov chain Monte Carlo]] [[algorithm]] for [[pseudo-random number sampling]], i.e. for drawing random samples from a statistical distribution. The method is based on the observation that to sample a [[random variable]] one can sample uniformly from the region under the graph of its density function.<ref>Damlen, P., Wakefield, J., & Walker, S. (1999). Gibbs sampling for Bayesian non‐conjugate and hierarchical models by using auxiliary variables. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 61(2), 331-344.Chicago</ref><ref name="radford03">{{cite journal
|first=Radford M. |last=Neal |title=Slice sampling |journal=Annals of Statistics |volume=31 |issue=3 |year=2003 |pages=705–767 |doi=10.1214/aos/1056562461}}</ref>
Slice sampling gets its name from the first step: defining a ''slice'' by sampling from an auxiliary variable <math>Y</math>. This variable is sampled from <math>[0, f(x)]</math>, where <math>f(x)</math> is either the [[probability density function]] (PDF) of ''X'' or is at least proportional to its PDF. This defines a slice of ''X'' where <math>f(x) \ge Y</math>. In other words, we are now looking at a region of ''X'' where the probability density is at least <math>Y</math>. Then the next value of ''X'' is sampled uniformly from this slice. A new value of <math>Y</math> is sampled, then ''X'', and so on. This can be visualized as alternately sampling the y-position and then the x-position of points under the PDF, so the ''X'' values are drawn from the desired distribution. The <math>Y</math> values have no particular consequences or interpretations outside of their usefulness for the procedure.
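For a distribution whose slice endpoints are available in closed form, such as the standard normal, the alternating ''Y''/''X'' updates above can be sketched in a few lines. This is an illustrative Python sketch, not part of the method's original presentation; the function name <code>slice_sample_normal</code> is chosen here for clarity.

```python
import math
import random

def slice_sample_normal(x0, n):
    """Slice sampling for a standard normal: alternately draw the
    auxiliary height y, then x uniformly from the slice {x : f(x) >= y}."""
    f = lambda t: math.exp(-t * t / 2.0) / math.sqrt(2.0 * math.pi)
    samples = []
    x = x0
    for _ in range(n):
        y = random.uniform(0.0, f(x))       # vertical move: height under the curve
        # the slice {x : f(x) >= y} is the interval [-alpha, alpha]
        alpha = math.sqrt(-2.0 * math.log(y * math.sqrt(2.0 * math.pi)))
        x = random.uniform(-alpha, alpha)   # horizontal move: uniform on the slice
        samples.append(x)
    return samples
```

Because each slice is sampled exactly, this chain mixes quickly; the general case below handles densities whose slices have no closed-form endpoints.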
 
If both the PDF and its inverse are available, and the distribution is unimodal, then finding the slice and sampling from it are simple. If not, a stepping-out procedure can be used to find a region whose endpoints fall outside the slice. Then, a sample can be drawn from the slice using [[rejection sampling]]. Various procedures for this are described in detail by [[Radford M. Neal]].<ref name="radford03"/>
 
Note that, in contrast to many available methods for generating random numbers from non-uniform distributions, random variates generated directly by this approach will exhibit serial statistical dependence. This is because to draw the next sample, we define the slice based on the value of ''f''(''x'') for the current sample.
In practice, sampling from a horizontal slice of a multimodal distribution is difficult. There is a tension between obtaining a large sampling region and thereby making possible large moves in the distribution space, and obtaining a simpler sampling region to increase efficiency. One option for simplifying this process is regional expansion and contraction.
 
* First, a width parameter ''w'' is used to define the region containing the given ''x'' value. Each endpoint of this region is tested to see whether it lies outside the given slice. If not, the region is extended in the appropriate direction(s) by ''w'' until both endpoints lie outside the slice.
* A candidate sample is selected uniformly from within this region. If the candidate sample lies inside of the slice, then it is accepted as the new sample. If it lies outside of the slice, the candidate point becomes the new boundary for the region. A new candidate sample is taken uniformly. The process repeats until the candidate sample is within the slice. (See diagram for a visual example).
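The stepping-out and shrinkage steps above can be sketched as follows. This is a hedged Python illustration under the assumption that <code>f</code> is an (unnormalized) density; the function name is hypothetical.

```python
import math
import random

def step_out_and_shrink(f, x, y, w):
    """One horizontal update: grow an interval of width w around x by
    stepping out until both endpoints leave the slice {x : f(x) >= y},
    then shrink the interval toward x on each rejected candidate."""
    # Stepping out: place an interval of width w around x at a random offset.
    left = x - w * random.random()
    right = left + w
    while f(left) >= y:      # extend left until that endpoint is outside the slice
        left -= w
    while f(right) >= y:     # extend right likewise
        right += w
    # Shrinkage: rejected candidates become the new interval boundaries.
    while True:
        candidate = random.uniform(left, right)
        if f(candidate) >= y:
            return candidate
        if candidate < x:
            left = candidate
        else:
            right = candidate
```

Shrinking on rejection guarantees termination, since the interval always contains the current point ''x'', which lies inside the slice.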
 
 
== Slice-within-Gibbs sampling ==
In a [[Gibbs sampling|Gibbs sampler]], one needs to draw efficiently from all the full-conditional distributions. When sampling from a full-conditional density is not easy, a single iteration of slice sampling or the Metropolis-Hastings algorithm can be used within-Gibbs to sample from the variable in question. If the full-conditional density is log-concave, a more efficient alternative is the application of [[Rejection sampling|adaptive rejection sampling]] (ARS) methods.<ref>{{Cite journal|title = Adaptive Rejection Sampling for Gibbs Sampling|jstor = 2347565|journal = Journal of the Royal Statistical Society. Series C (Applied Statistics)|date = 1992-01-01|pages = 337–348|volume = 41|issue = 2|doi = 10.2307/2347565|first1 = W. R.|last1 = Gilks|first2 = P.|last2 = Wild}}</ref><ref>{{Cite journal|title = A Rejection Technique for Sampling from T-concave Distributions|journal = ACM Trans. Math. Softw.|date = 1995-06-01|issn = 0098-3500|pages = 182–193|volume = 21|issue = 2|doi = 10.1145/203082.203089|first = Wolfgang|last = Hörmann|citeseerx = 10.1.1.56.6055}}</ref><ref>{{Cite journal|title = A generalization of the adaptive rejection sampling algorithm|journal = Statistics and Computing|date = 2010-08-25|issn = 0960-3174|pages = 633–647|volume = 21|issue = 4|doi = 10.1007/s11222-010-9197-9|first = Luca|last = Martino|first2 = Joaquín|last2 = Míguez|hdl = 10016/16624|hdl-access = free|s2cid = 592740}}</ref> When the ARS techniques cannot be applied (since the full-conditional is non-log-concave), the '''adaptive rejection Metropolis sampling algorithms''' are often employed.<ref>{{Cite journal|title = Adaptive Rejection Metropolis Sampling within Gibbs Sampling|jstor = 2986138|journal = Journal of the Royal Statistical Society. Series C (Applied Statistics)|date = 1995-01-01|pages = 455–472|volume = 44|issue = 4|doi = 10.2307/2986138|first1 = W. R.|last1 = Gilks|first2 = N. G.|last2 = Best|author2-link= Nicky Best |first3 = K. K. C.|last3 = Tan}}</ref><ref>{{Cite journal|title = Adaptive rejection Metropolis sampling using Lagrange interpolation polynomials of degree 2|journal = Computational Statistics & Data Analysis|date = 2008-03-15|pages = 3408–3423|volume = 52|issue = 7|doi = 10.1016/j.csda.2008.01.005|first1 = Renate|last1 = Meyer|first2 = Bo|last2 = Cai|first3 = François|last3 = Perron}}</ref><ref>{{Cite journal|title = Independent Doubly Adaptive Rejection Metropolis Sampling Within Gibbs Sampling|journal = IEEE Transactions on Signal Processing|date = 2015-06-01|issn = 1053-587X|pages = 3123–3138|volume = 63|issue = 12|doi = 10.1109/TSP.2015.2420537|first = L.|last = Martino|first2 = J.|last2 = Read|first3 = D.|last3 = Luengo|arxiv = 1205.5494}}</ref>
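A single slice-sampling iteration per full-conditional can be sketched as follows. This is an illustrative Python sketch, with a correlated bivariate normal (correlation 0.5) chosen here as the example target; the function names are hypothetical, and working with the log-density is a common implementation choice to avoid numerical underflow.

```python
import math
import random

def slice_update(logf, x, w=1.0):
    """A single univariate slice-sampling step (stepping out followed by
    shrinkage) on an unnormalized log-density logf."""
    logy = logf(x) + math.log(random.random())   # log of the auxiliary height
    left = x - w * random.random()
    right = left + w
    while logf(left) > logy:                     # step out until outside the slice
        left -= w
    while logf(right) > logy:
        right += w
    while True:                                  # shrink on each rejection
        cand = random.uniform(left, right)
        if logf(cand) > logy:
            return cand
        if cand < x:
            left = cand
        else:
            right = cand

def gibbs_sweep(x1, x2):
    """One slice-within-Gibbs sweep for a bivariate normal with
    correlation rho: each full-conditional gets one slice step."""
    rho = 0.5
    # conditional of x1 given x2 is N(rho * x2, 1 - rho^2), and vice versa
    x1 = slice_update(lambda t: -(t - rho * x2) ** 2 / (2 * (1 - rho ** 2)), x1)
    x2 = slice_update(lambda t: -(t - rho * x1) ** 2 / (2 * (1 - rho ** 2)), x2)
    return x1, x2
```

Only one slice iteration per variable is needed per sweep; the resulting chain still leaves the joint distribution invariant.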
 
==Multivariate methods==
 
===Hyperrectangle slice sampling===
This method adapts the univariate algorithm to the multivariate case by substituting a hyperrectangle for the one-dimensional ''w'' region used in the original. The hyperrectangle ''H'' is initialized to a random position over the slice. ''H'' is then shrunk as points from it are rejected.
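The hyperrectangle update can be sketched as follows. This is a hedged Python illustration with hypothetical names; the per-coordinate shrinking rule used here (move the boundary of each axis toward the current point on rejection) is one common way to realize the shrinkage described above.

```python
import math
import random

def hyperrect_slice_step(logf, x, w):
    """One multivariate slice update: place a hyperrectangle with edge
    lengths w at a random position containing x, then shrink it
    coordinate-wise as candidates outside the slice are rejected."""
    d = len(x)
    logy = logf(x) + math.log(random.random())   # log of the auxiliary height
    lower = [x[i] - w[i] * random.random() for i in range(d)]
    upper = [lower[i] + w[i] for i in range(d)]
    while True:
        cand = [random.uniform(lower[i], upper[i]) for i in range(d)]
        if logf(cand) > logy:
            return cand
        # shrink every axis toward the current point; the rectangle
        # always keeps x inside, so the loop terminates
        for i in range(d):
            if cand[i] < x[i]:
                lower[i] = cand[i]
            else:
                upper[i] = cand[i]
```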
 
===Reflective slice sampling===
An implementation in the [[Macsyma]] language is:
 
<syntaxhighlight lang="smalltalk">
slice(x) := block([y, alpha],
y:random(exp(-x^2 / 2.0) / sqrt(2.0 * dfloat(%pi))),
alpha:sqrt(-2.0 * ln(y * sqrt(2.0 * dfloat(%pi)))),
x:signum(random()) * random(alpha)
);
</syntaxhighlight>
 
==See also==