Stochastic approximation


Stochastic approximation methods are a family of iterative stochastic optimization algorithms that attempt to find zeroes or extrema of functions which cannot be computed directly, but only estimated via noisy observations. The first, and prototypical, algorithms of this kind were the Robbins-Monro and Kiefer-Wolfowitz algorithms.

Robbins-Monro algorithm

In the Robbins-Monro algorithm, introduced in 1951[1], one has a function $M(x)$ for which one wishes to find the value of $x$, $\theta$, satisfying $M(\theta) = \alpha$. However, what is observable is not $M(x)$, but rather a random variable $N(x)$ such that $\mathrm{E}[N(x)] = M(x)$. The algorithm is then to construct a sequence $x_1, x_2, \dots$ which satisfies

$$x_{n+1} = x_n + a_n \bigl(\alpha - N(x_n)\bigr).$$

Here, $a_1, a_2, \dots$ is a sequence of positive step sizes. Robbins and Monro proved that, if $N(x)$ is uniformly bounded, $M(x)$ is nondecreasing, $M'(\theta)$ exists and is positive, and if $a_n$ satisfies a set of bounds (fulfilled if one takes $a_n = 1/n$), then $x_n$ converges in $L^2$ (and hence also in probability) to $\theta$ ([1], Theorem 2). In general, the $a_n$ need not equal $1/n$. However, to ensure convergence, they should converge to zero, and in order to average out the noise in $N(x)$, they should converge slowly.
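
As a concrete illustration, the following is a minimal sketch of the Robbins-Monro recursion in Python. The function names, the noise model, the example target $M(x) = 2x + 1$ with $\alpha = 5$, and the choice $a_n = 1/n$ are assumptions made for this example only, not part of the algorithm as stated above.

    import random

    def robbins_monro(noisy_measurement, alpha, x0, n_steps=10_000):
        """Minimal Robbins-Monro sketch: find theta with M(theta) = alpha,
        given only noisy observations N(x) with E[N(x)] = M(x)."""
        x = x0
        for n in range(1, n_steps + 1):
            a_n = 1.0 / n  # decreasing step sizes satisfying the Robbins-Monro conditions
            x = x + a_n * (alpha - noisy_measurement(x))
        return x

    # Illustrative example (not from the article): M(x) = 2x + 1 observed with
    # Gaussian noise, so the root of M(x) = 5 is theta = 2.
    noisy = lambda x: 2 * x + 1 + random.gauss(0.0, 1.0)
    print(robbins_monro(noisy, alpha=5.0, x0=0.0))

With a decreasing step size such as $a_n = 1/n$, the noise is averaged out and the iterate settles near $\theta = 2$ in this example.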

Kiefer-Wolfowitz algorithm

In the Kiefer-Wolfowitz algorithm[2], introduced a year after the Robbins-Monro algorithm, one wishes to find the maximum, $\theta$, of the unknown $M(x)$ and constructs a sequence $x_1, x_2, \dots$ such that

$$x_{n+1} = x_n + a_n \frac{N(x_n + c_n) - N(x_n - c_n)}{c_n}.$$

Here, $a_1, a_2, \dots$ is a sequence of positive step sizes which serve the same function as in the Robbins-Monro algorithm, and $c_1, c_2, \dots$ is a sequence of positive step sizes which are used to estimate, via finite differences, the derivative of $M$. Kiefer and Wolfowitz showed that, if $a_n$ and $c_n$ satisfy various bounds (fulfilled by taking $a_n = 1/n$, $c_n = n^{-1/3}$), and $M$ and $N$ satisfy some technical conditions, then the sequence $x_n$ converges in probability to $\theta$.
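
A minimal sketch of the Kiefer-Wolfowitz recursion in the same style as above; the quadratic example $M(x) = -(x - 3)^2$, the noise model, and the choices $a_n = 1/n$, $c_n = n^{-1/3}$ are illustrative assumptions.

    import random

    def kiefer_wolfowitz(noisy_measurement, x0, n_steps=10_000):
        """Minimal Kiefer-Wolfowitz sketch: climb toward the maximum of M(x)
        using a finite-difference estimate of the slope from noisy observations."""
        x = x0
        for n in range(1, n_steps + 1):
            a_n = 1.0 / n            # step sizes, as in Robbins-Monro
            c_n = n ** (-1.0 / 3.0)  # finite-difference widths
            slope_est = (noisy_measurement(x + c_n) - noisy_measurement(x - c_n)) / c_n
            x = x + a_n * slope_est
        return x

    # Illustrative example (not from the article): M(x) = -(x - 3)**2 has its maximum at theta = 3.
    noisy = lambda x: -(x - 3.0) ** 2 + random.gauss(0.0, 0.1)
    print(kiefer_wolfowitz(noisy, x0=0.0))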

Subsequent developments

An extensive theoretical literature has grown up around these algorithms, concerning conditions for convergence, rates of convergence, multivariate and other generalizations, proper choice of step size, possible noise models, and so on.[3],[4] These methods are also applied in control theory, in which case the unknown function which we wish to optimize or find the zero of may vary in time. In this case, the step size $a_n$ should not converge to zero but should be chosen so as to track the function ([3], 2nd ed., chapter 3), as in the sketch below.
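
As a rough sketch of the tracking idea, the recursion below keeps the step size at a small constant so that the iterate can follow a root that drifts in time; the linear drift model, the constant 0.1, and the function names are assumptions chosen only for illustration.

    import random

    def track_root(noisy_measurement_at_time, alpha, x0, a_const=0.1, n_steps=1000):
        """Robbins-Monro-style recursion with a constant step size, so the
        iterate can track a root that drifts over time (it no longer converges,
        but hovers near the moving target)."""
        x = x0
        for n in range(1, n_steps + 1):
            x = x + a_const * (alpha - noisy_measurement_at_time(n, x))
        return x

    # Illustrative drifting target: M_n(x) = x - 0.001*n, so the root of
    # M_n(x) = 0 moves as theta_n = 0.001*n.
    noisy = lambda n, x: (x - 0.001 * n) + random.gauss(0.0, 0.1)
    print(track_root(noisy, alpha=0.0, x0=0.0))  # ends up near 0.001 * 1000 = 1.0

With a constant step size the iterate lags slightly behind the moving root and retains some noise, which is the trade-off accepted in exchange for the ability to track.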

References

  1. A Stochastic Approximation Method, Herbert Robbins and Sutton Monro, Annals of Mathematical Statistics 22, #3 (September 1951), pp. 400–407.
  2. Stochastic Estimation of the Maximum of a Regression Function, J. Kiefer and J. Wolfowitz, Annals of Mathematical Statistics 23, #3 (September 1952), pp. 462–466.
  3. Stochastic Approximation Algorithms and Applications, Harold J. Kushner and G. George Yin, New York: Springer-Verlag, 1997. ISBN 038794916X; 2nd ed., titled Stochastic Approximation and Recursive Algorithms and Applications, 2003, ISBN 0387008942.
  4. Stochastic Approximation and Recursive Estimation, Mikhail Borisovich Nevel'son and Rafail Zalmanovich Has'minskiĭ, translated by the Israel Program for Scientific Translations and B. Silver, Providence, RI: American Mathematical Society, 1973, 1976. ISBN 0821815970.