Slice sampling

 
*First, a width parameter ''w'' is used to define the region containing the current ''x'' value. Each endpoint of this region is tested to see whether it lies outside the given slice; if either endpoint still lies inside the slice, the region is extended in that direction in steps of ''w'' until both endpoints lie outside the slice.
 
*A candidate sample is then selected uniformly from within this region. If the candidate lies inside the slice, it is accepted as the new sample. If it lies outside the slice, it becomes the new boundary of the region on its side of the current sample, and a new candidate is drawn uniformly from the shrunken region. The process repeats until a candidate falls within the slice. (See diagram for a visual example.)
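The stepping-out and shrinkage procedures above can be sketched in a few lines of Python. This is a minimal illustration rather than a reference implementation: the target density, starting point, and width below are arbitrary choices, and the optional limit on the number of stepping-out moves is omitted for brevity.

```python
import random

def slice_sample(f, x0, w, n_samples):
    """Slice sampling with the stepping-out and shrinkage procedures.

    f  : unnormalized target density (need not integrate to 1)
    x0 : starting point; w : width parameter; n_samples : draws to return
    """
    samples = []
    x = x0
    for _ in range(n_samples):
        # Choose the slice: a vertical level drawn uniformly in (0, f(x)).
        y = random.uniform(0.0, f(x))
        # Stepping out: randomly place an interval of width w around x,
        # then grow each end by w until both endpoints lie outside the slice.
        left = x - w * random.random()
        right = left + w
        while f(left) > y:
            left -= w
        while f(right) > y:
            right += w
        # Shrinkage: sample uniformly from the interval; a rejected point
        # becomes the new bound on its side of x, and we redraw.
        while True:
            x_new = random.uniform(left, right)
            if f(x_new) > y:
                break
            if x_new < x:
                left = x_new
            else:
                right = x_new
        x = x_new
        samples.append(x)
    return samples
```

Because every accepted point satisfies f(x) > y > 0, the draws automatically stay inside the support of the density.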
 
 
== Slice-within-Gibbs sampling ==
In a [[Gibbs sampling|Gibbs sampler]], one needs to draw efficiently from all the full-conditional distributions. When sampling from a full-conditional density is not easy, a single iteration of slice sampling or of the [[Metropolis–Hastings algorithm]] can be used within Gibbs sampling to sample from the variable in question. If the full-conditional density is log-concave, a more efficient alternative is the application of [[Rejection sampling|adaptive rejection sampling]] (ARS) methods.<ref>{{Cite journal|title = Adaptive Rejection Sampling for Gibbs Sampling|url = http://www.jstor.org/stable/2347565|journal = Journal of the Royal Statistical Society. Series C (Applied Statistics)|date = 1992-01-01|pages = 337–348|volume = 41|issue = 2|doi = 10.2307/2347565|first = W. R.|last = Gilks|first2 = P.|last2 = Wild}}</ref><ref>{{Cite journal|title = A Rejection Technique for Sampling from T-concave Distributions|url = http://doi.acm.org/10.1145/203082.203089|journal = ACM Trans. Math. Softw.|date = 1995-06-01|issn = 0098-3500|pages = 182–193|volume = 21|issue = 2|doi = 10.1145/203082.203089|first = Wolfgang|last = Hörmann}}</ref><ref>{{Cite journal|title = A generalization of the adaptive rejection sampling algorithm|url = http://link.springer.com/article/10.1007/s11222-010-9197-9|journal = Statistics and Computing|date = 2010-08-25|issn = 0960-3174|pages = 633–647|volume = 21|issue = 4|doi = 10.1007/s11222-010-9197-9|language = en|first = Luca|last = Martino|first2 = Joaquín|last2 = Míguez}}</ref> When the ARS techniques cannot be applied (because the full-conditional is not log-concave), '''adaptive rejection Metropolis sampling''' algorithms are often employed.<ref>{{Cite journal|title = Adaptive Rejection Metropolis Sampling within Gibbs Sampling|url = http://www.jstor.org/stable/2986138|journal = Journal of the Royal Statistical Society. Series C (Applied Statistics)|date = 1995-01-01|pages = 455–472|volume = 44|issue = 4|doi = 10.2307/2986138|first = W. R.|last = Gilks|first2 = N. G.|last2 = Best|first3 = K. K. C.|last3 = Tan}}</ref><ref>{{Cite journal|title = Adaptive rejection Metropolis sampling using Lagrange interpolation polynomials of degree 2|url = http://www.sciencedirect.com/science/article/pii/S016794730800008X|journal = Computational Statistics & Data Analysis|date = 2008-03-15|pages = 3408–3423|volume = 52|issue = 7|doi = 10.1016/j.csda.2008.01.005|first = Renate|last = Meyer|first2 = Bo|last2 = Cai|first3 = François|last3 = Perron}}</ref><ref>{{Cite journal|title = Independent Doubly Adaptive Rejection Metropolis Sampling Within Gibbs Sampling|url = http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7080917|journal = IEEE Transactions on Signal Processing|date = 2015-06-01|issn = 1053-587X|pages = 3123–3138|volume = 63|issue = 12|doi = 10.1109/TSP.2015.2420537|first = L.|last = Martino|first2 = J.|last2 = Read|first3 = D.|last3 = Luengo}}</ref>
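A minimal sketch of slice-within-Gibbs follows: one slice-sampling transition serves as the update for each full conditional. The bivariate target density used here is hypothetical, chosen only so that both full conditionals are non-standard (and not log-concave everywhere), which is the situation the text describes.

```python
import math
import random

def slice_update(logf, x, w=1.0):
    """One slice-sampling transition for a univariate log-density logf,
    using stepping out and shrinkage; usable as a within-Gibbs step."""
    logy = logf(x) + math.log(1.0 - random.random())  # slice level, in log space
    left = x - w * random.random()                    # randomly placed interval
    right = left + w
    while logf(left) > logy:                          # step out to the left
        left -= w
    while logf(right) > logy:                         # step out to the right
        right += w
    while True:                                       # shrink until accepted
        x_new = random.uniform(left, right)
        if logf(x_new) > logy:
            return x_new
        if x_new < x:
            left = x_new
        else:
            right = x_new

def slice_within_gibbs(n_iters):
    """Gibbs sweep over the made-up target p(x, y) ~ exp(-(x^4 + y^4 + x^2 y^2)),
    updating each non-standard full conditional with one slice step."""
    x, y = 0.0, 0.0
    chain = []
    for _ in range(n_iters):
        # Full conditional of x given y (additive constants in y dropped).
        x = slice_update(lambda t, y=y: -(t ** 4 + t * t * y * y), x)
        # Full conditional of y given x, by symmetry.
        y = slice_update(lambda t, x=x: -(t ** 4 + t * t * x * x), y)
        chain.append((x, y))
    return chain
```

Working in log space avoids underflow when the conditional density is tiny, which is also why implementations usually phrase the slice level as log ''f''(''x'') + log ''U''.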
 
== Multivariate methods ==
 
*We first draw a uniform random value ''y'' from the range of f(x) in order to define our slice(s). Suppose y=0.01.
 
*Next, we set our width parameter ''w'' which we will use to expand our region of consideration. Suppose w=2.
 
*Next, we need an initial value for ''x''. We draw ''x'' uniformly from the portion of the ___domain of f(x) where f(x) exceeds our current ''y''. Suppose x=2.
 
*Because x=2 and w=2, our current region of interest is the width-''w'' interval around ''x'', bounded by (1,3).
 
*Now, each endpoint of this region is tested to see if it lies outside the given slice. Our right bound lies outside the slice, but our left bound does not, so we expand the left bound by subtracting ''w'' from it until it extends past the limit of the slice. After this process, the new bounds of our region of interest are (-4,3).
 
*Next, we take a uniform sample within (-4,3). Suppose this sample yields x=-3.9. This sample lies within our region of interest but not within our slice, so we make it the new left bound of our region of interest. Now we take a uniform sample from (-3.9,3). This time our sample yields x=1, which lies within our slice and is therefore accepted as the new sample. Had it not been within our slice, we would continue the shrinking/resampling process until a candidate within the slice is found.
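The shrinking/resampling steps of this example can be replayed in code. Since the example never specifies f(x), the slice-membership test below is a stand-in interval chosen only to classify the example's points the same way (1 and 2 inside; -3.9 and 3 outside), and the fixed candidate list stands in for the uniform draws:

```python
def shrink_until_accept(in_slice, x_current, left, right, candidates):
    """Replay the shrinkage procedure: each rejected candidate replaces the
    region bound on its own side of the current point x_current."""
    for x_new in candidates:
        if in_slice(x_new):
            return x_new, (left, right)
        if x_new < x_current:   # rejected point left of x: raise the left bound
            left = x_new
        else:                   # rejected point right of x: lower the right bound
            right = x_new
    raise RuntimeError("candidate stream exhausted before acceptance")

# Hypothetical slice: contains 1 and 2 but not -3.9 or 3 (values from the example).
in_slice = lambda t: -3.5 < t < 2.7

# Region (-4, 3) around x=2; the first draw -3.9 is rejected and shrinks the
# left bound to -3.9, then the second draw 1 lies in the slice and is accepted.
accepted, bounds = shrink_until_accept(in_slice, 2.0, -4.0, 3.0, [-3.9, 1.0])
# accepted == 1.0, bounds == (-3.9, 3.0)
```

Shrinking toward the current point in this way guarantees the loop terminates while leaving the slice's uniform distribution invariant.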