{{Short description|Family of iterative methods}}
'''Stochastic approximation''' methods are a family of [[iterative methods]] typically used for [[root-finding]] problems or for [[optimization problem|optimization]] problems. The recursive update rules of stochastic approximation methods can be used, among other things, for solving linear systems when the collected data is corrupted by noise, or for approximating [[Extremum|extreme values]] of functions which cannot be computed directly, but only estimated via noisy observations.
 
In a nutshell, stochastic approximation algorithms deal with a function of the form <math display="inline"> f(\theta) = \operatorname E_{\xi} [F(\theta,\xi)] </math> which is the [[expected value]] of a function depending on a [[random variable]] <math display="inline">\xi </math>. The goal is to recover properties of such a function <math display="inline">f</math> without evaluating it directly. Instead, stochastic approximation algorithms use random samples of <math display="inline">F(\theta,\xi)</math> to efficiently approximate properties of <math display="inline">f</math> such as zeros or extrema.
 
Recently, stochastic approximations have found extensive applications in the fields of statistics and machine learning, especially in settings with [[big data]]. These applications range from [[stochastic optimization]] methods and algorithms, to online forms of the [[Expectation–maximization algorithm|EM algorithm]], [[reinforcement learning]] via [[Temporal difference learning|temporal differences]], [[deep learning]], and others.<ref name=":1">{{cite journal |last1=Toulis |first1=Panos |first2=Edoardo |last2=Airoldi|title=Scalable estimation strategies based on stochastic approximations: classical results and new insights |journal=Statistics and Computing |volume=25 |issue=4 |year=2015 |pages=781–795|doi=10.1007/s11222-015-9560-y|pmid=26139959 |pmc=4484776 }}</ref>
Stochastic approximation algorithms have also been used in the social sciences to describe collective dynamics: fictitious play in learning theory and consensus algorithms can be studied using their theory.<ref>{{cite web|last1=Le Ny|first1=Jerome|title=Introduction to Stochastic Approximation Algorithms|url=http://www.professeurs.polymtl.ca/jerome.le-ny/teaching/DP_fall09/notes/lec11_SA.pdf|website=Polytechnique Montreal|publisher=Teaching Notes|access-date=16 November 2016}}</ref>
 
The earliest, and prototypical, algorithms of this kind are the '''Robbins–Monro''' and '''Kiefer–Wolfowitz''' algorithms introduced respectively in 1951 and 1952.
 
==Robbins–Monro algorithm==
The Robbins–Monro algorithm, introduced in 1951 by [[Herbert Robbins]] and [[John U. Monro#Personal life|Sutton Monro]],<ref name="rm">{{Cite journal | last1 = Robbins | first1 = H. | author-link = Herbert Robbins| last2 = Monro | first2 = S. | doi = 10.1214/aoms/1177729586 | title = A Stochastic Approximation Method | journal = The Annals of Mathematical Statistics | volume = 22 | issue = 3 | pages = 400 | year = 1951 | doi-access = free }}</ref> presented a methodology for solving a root finding problem, where the function is represented as an expected value. Assume that we have a function <math display="inline">M(\theta)</math>, and a constant <math display="inline">\alpha</math>, such that the equation <math display="inline">M(\theta) = \alpha</math> has a unique root at <math display="inline">\theta^*.</math> It is assumed that while we cannot directly observe the function <math display="inline">M(\theta),</math> we can instead obtain measurements of the random variable <math display="inline">N(\theta)</math> where <math display="inline">\operatorname E[N(\theta)] = M(\theta)</math>. The structure of the algorithm is to then generate iterates of the form:
 
<math display="block">\theta_{n+1}=\theta_n - a_n(N(\theta_n) - \alpha)</math>
 
Here, <math>a_1, a_2, \dots</math> is a sequence of positive step sizes. [[Herbert Robbins|Robbins]] and Monro proved<ref name="rm" /><sup>, Theorem 2</sup> that <math>\theta_n</math> [[convergence of random variables|converges]] in <math>L^2</math> (and hence also in probability) to <math>\theta^*</math>, and Blum<ref name=":0">{{Cite journal|last=Blum|first=Julius R.|date=1954-06-01|title=Approximation Methods which Converge with Probability one|journal=The Annals of Mathematical Statistics|language=EN|volume=25|issue=2|pages=382–386|doi=10.1214/aoms/1177728794|issn=0003-4851|doi-access=free}}</ref> later proved the convergence is actually with probability one, provided that:
* <math display="inline">N(\theta)</math> is uniformly bounded,
* <math display="inline">M(\theta)</math> is nondecreasing,
* <math display="inline">M'(\theta^*)</math> exists and is positive, and
* The sequence <math display="inline">a_n</math> satisfies the following requirements:
 
<math display="block">\qquad \sum^{\infty}_{n=0}a_n = \infty \quad \mbox{ and } \quad \sum^{\infty}_{n=0}a^2_n < \infty \quad </math>
A particular sequence of steps which satisfies these conditions, and which was suggested by Robbins and Monro, has the form <math display="inline">a_n=a/n</math> for some <math display="inline"> a > 0 </math>. Other series, such as <math>a_n = \frac{1}{n \ln n}, \frac{1}{n \ln n \ln\ln n}, \dots</math>, are possible, but in order to average out the noise in <math display="inline">N(\theta)</math>, the above conditions must be met.
 
=== Example ===
Consider the problem of estimating the mean <math>\theta^*</math> of a probability distribution from a stream of independent samples <math>X_1, X_2, \dots</math>.
 
Let <math>N(\theta) := \theta - X</math>; then the unique solution to <math display="inline">\operatorname E[N(\theta)] = 0</math> is the desired mean <math>\theta^*</math>. The RM algorithm gives us<math display="block">\theta_{n+1}=\theta_n - a_n(\theta_n - X_n) </math>This is equivalent to [[stochastic gradient descent]] with loss function <math>L(\theta) = \frac 12 \|X - \theta\|^2 </math>. It is also equivalent to a weighted average:<math display="block">\theta_{n+1}=(1-a_n)\theta_n + a_n X_n </math>In general, if there exists some function <math>L</math> such that <math>\nabla L(\theta) = N(\theta) - \alpha </math>, then the Robbins–Monro algorithm is equivalent to stochastic gradient descent with loss function <math>L(\theta)</math>. However, the RM algorithm does not require <math>L</math> to exist in order to converge.
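
For illustration, here is a minimal Python sketch of this recursion on an assumed stream of Gaussian samples (the distribution, seed, and number of steps are choices made for the sketch); with <math display="inline">a_n = 1/n</math> and <math display="inline">\theta_1 = 0</math>, the iterates coincide with the running sample average up to floating-point rounding:

<syntaxhighlight lang="python">
import numpy as np

# Sketch: Robbins–Monro for mean estimation, N(theta) = theta - X, alpha = 0.
# The distribution, seed, and number of steps are illustrative assumptions.
rng = np.random.default_rng(0)
samples = rng.normal(loc=2.5, scale=1.0, size=10_000)  # stream X_1, X_2, ...

theta = 0.0                              # initial guess theta_1
for n, x in enumerate(samples, start=1):
    a_n = 1.0 / n                        # step sizes satisfying both conditions
    theta = theta - a_n * (theta - x)    # theta_{n+1} = theta_n - a_n * N(theta_n)

print(theta, samples.mean())             # agree up to floating-point rounding
</syntaxhighlight>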
 
===Complexity results===
# If <math display="inline">f(\theta)</math> is twice continuously differentiable and strongly convex, and the minimizer of <math display="inline">f(\theta)</math> belongs to the interior of <math display="inline">\Theta</math>, then the Robbins–Monro algorithm will achieve the asymptotically optimal convergence rate with respect to the objective function, namely <math display="inline">\operatorname E[f(\theta_n) - f^*] = O(1/n)</math>, where <math display="inline">f^*</math> is the minimal value of <math display="inline">f(\theta)</math> over <math display="inline">\theta \in \Theta</math>.<ref name="jsacks">{{Cite journal | last1 = Sacks | first1 = J. | title = Asymptotic Distribution of Stochastic Approximation Procedures | doi = 10.1214/aoms/1177706619 | journal = The Annals of Mathematical Statistics | volume = 29 | issue = 2 | pages = 373–405 | year = 1958 | jstor = 2237335| doi-access = free }}</ref><ref name="NJLS">{{Cite journal | last1 = Nemirovski | first1 = A. | author-link1 = Arkadi Nemirovski| last2 = Juditsky | first2 = A. | last3 = Lan | first3 = G. | last4 = Shapiro | first4 = A. | title = Robust Stochastic Approximation Approach to Stochastic Programming | doi = 10.1137/070704277 | journal = SIAM Journal on Optimization | volume = 19 | issue = 4 | pages = 1574 | year = 2009 }}</ref>
# Conversely, in the general convex case, where we lack both the assumption of smoothness and strong convexity, Nemirovski and Yudin<ref name="NYcomp">''Problem Complexity and Method Efficiency in Optimization'', A. Nemirovski and D. Yudin, Wiley-Interscience Series in Discrete Mathematics '''15''', John Wiley, New York (1983).</ref> have shown that the asymptotically optimal convergence rate, with respect to the objective function values, is <math display="inline">O(1/\sqrt{n})</math>. They have also proven that this rate cannot be improved.
 
===Subsequent developments and Polyak–Ruppert averaging===
While the Robbins–Monro algorithm is theoretically able to achieve <math display="inline"> O(1/n)</math> under the assumption of twice continuous differentiability and strong convexity, it can perform quite poorly upon implementation. This is primarily due to the fact that the algorithm is very sensitive to the choice of the step size sequence, and the supposed asymptotically optimal step size policy can be quite harmful in the beginning.<ref name="NJLS" /><ref name="jcsbook">[https://books.google.com/books?id=f66OIvvkKnAC&q=%22Robbins-Monro%22 Introduction to Stochastic Search and Optimization: Estimation, Simulation and Control], J.C. Spall, ''John Wiley'' ''Hoboken, NJ'', (2003).</ref>
 
Chung (1954)<ref>{{Cite journal|last=Chung|first=K. L.|date=1954-09-01|title=On a Stochastic Approximation Method|journal=The Annals of Mathematical Statistics|language=EN|volume=25|issue=3|pages=463–483|doi=10.1214/aoms/1177728716|issn=0003-4851|doi-access=free}}</ref> and Fabian (1968)<ref>{{Cite journal|last=Fabian|first=Vaclav|date=1968-08-01|title=On Asymptotic Normality in Stochastic Approximation|journal=The Annals of Mathematical Statistics|language=EN|volume=39|issue=4|pages=1327–1332|doi=10.1214/aoms/1177698258|issn=0003-4851|doi-access=free}}</ref> showed that the optimal convergence rate <math display="inline">O(1/\sqrt{n})</math> is achieved with <math display="inline">a_n=\nabla^2f(\theta^*)^{-1}/n</math> (or <math display="inline">a_n=\frac{1}{nM'(\theta^*)}</math>). Lai and Robbins<ref>{{Cite journal|last1=Lai|first1=T. L.|last2=Robbins|first2=Herbert|date=1979-11-01|title=Adaptive Design and Stochastic Approximation|journal=The Annals of Statistics|language=EN|volume=7|issue=6|pages=1196–1221|doi=10.1214/aos/1176344840|issn=0090-5364|doi-access=free}}</ref><ref>{{Cite journal|last1=Lai|first1=Tze Leung|last2=Robbins|first2=Herbert|date=1981-09-01|title=Consistency and asymptotic efficiency of slope estimates in stochastic approximation schemes|journal=Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete|language=en|volume=56|issue=3|pages=329–360|doi=10.1007/BF00536178|s2cid=122109044|issn=0044-3719|doi-access=free}}</ref> designed adaptive procedures to estimate <math display="inline">M'(\theta^*)</math> such that <math display="inline">\theta_n</math> has minimal asymptotic variance. However, the application of such optimal methods requires a great deal of a priori information which is hard to obtain in most situations. To overcome this shortfall, Polyak (1991)<ref>{{Cite journal|last=Polyak|first=B T|date=1991|title=New stochastic approximation type procedures. (In Russian.)|url=https://www.researchgate.net/publication/236736759|journal=Automation and Remote Control|volume=7|issue=7}}</ref> and Ruppert (1988)<ref>{{Cite report|last=Ruppert|first=David|title=Efficient estimators from a slowly converging robbins-monro process|url=https://www.researchgate.net/publication/242608650|type=Technical Report 781|publisher=Cornell University School of Operations Research and Industrial Engineering|year=1988}}</ref> independently developed a new optimal algorithm based on the idea of averaging the trajectories. Polyak and Juditsky<ref name="pj">{{Cite journal | last1 = Polyak | first1 = B. T. | last2 = Juditsky | first2 = A. B. | doi = 10.1137/0330046 | title = Acceleration of Stochastic Approximation by Averaging | journal = SIAM Journal on Control and Optimization | volume = 30 | issue = 4 | pages = 838 | year = 1992 }}</ref> also presented a method of accelerating Robbins–Monro for linear and non-linear root-searching problems through the use of longer steps and averaging of the iterates. The algorithm would have the following structure:<math display="block"> \theta_{n+1} - \theta_n = a_n(\alpha - N(\theta_n)), \qquad \bar{\theta}_n = \frac{1}{n} \sum^{n-1}_{i=0} \theta_i </math>The convergence of <math> \bar{\theta}_n </math> to the unique root <math>\theta^*</math> relies on the condition that the step sequence <math>\{a_n\}</math> decreases sufficiently slowly. That is
 
'''''A1)''''' <math display="block"> a_n \rightarrow 0, \qquad \frac{a_n - a_{n+1}}{a_n} = o(a_n)</math>
 
Therefore, the sequence <math display="inline">a_n = n^{-\alpha}</math> with <math display="inline">0 < \alpha < 1</math> satisfies this restriction, but <math display="inline">\alpha = 1</math> does not, hence the longer steps. Under the assumptions outlined in the Robbins–Monro algorithm, the resulting modification will result in the same asymptotically optimal convergence rate <math display="inline">O(1/\sqrt{n})</math> yet with a more robust step size policy.<ref name="pj" /> Prior to this, the idea of using longer steps and averaging the iterates had already been proposed by Nemirovski and Yudin<ref name="NY">On Cezari's convergence of the steepest descent method for approximating saddle points of convex-concave functions, A. Nemirovski and D. Yudin, ''Dokl. Akad. Nauk SSR'' '''2939''', (1978 (Russian)), Soviet Math. Dokl. '''19''' (1978 (English)).</ref> for the cases of solving the stochastic optimization problem with continuous convex objectives and for convex-concave saddle point problems. These algorithms were observed to attain the nonasymptotic rate <math display="inline">O(1/\sqrt{n})</math>.
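
A minimal sketch of the averaging scheme, on an assumed linear root-finding problem <math display="inline">M(\theta) = \theta - 2</math> with additive Gaussian noise (the target, noise model, step exponent, and step count are assumptions of the sketch):

<syntaxhighlight lang="python">
import numpy as np

# Sketch of Polyak–Ruppert averaging: Robbins–Monro with "longer" steps
# a_n = n^(-2/3), plus averaging of the iterates. The problem M(theta) =
# theta - 2 (root theta* = 2) and the noise model are illustrative assumptions.
rng = np.random.default_rng(1)
alpha = 0.0                        # we seek M(theta) = alpha = 0
theta, theta_sum, n_steps = 0.0, 0.0, 10_000

for n in range(1, n_steps + 1):
    a_n = n ** (-2.0 / 3.0)                    # satisfies A1): n^(-a), 0 < a < 1
    noisy = (theta - 2.0) + rng.normal()       # N(theta_n), E[N(theta)] = M(theta)
    theta = theta + a_n * (alpha - noisy)      # theta_{n+1} = theta_n + a_n (alpha - N(theta_n))
    theta_sum += theta

theta_bar = theta_sum / n_steps    # averaged iterate
print(theta, theta_bar)            # both near the root theta* = 2; the average is less noisy
</syntaxhighlight>

The averaged iterate <math display="inline">\bar\theta_n</math> smooths out the extra noise that the longer steps leave in <math display="inline">\theta_n</math>.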
 
A more general result is given in Chapter 11 of Kushner and Yin<ref>{{Cite book|url=https://www.springer.com/us/book/9780387008943|title=Stochastic Approximation and Recursive Algorithms and Applications|publisher=Springer|access-date=2016-05-16|isbn=9780387008943|last1=Kushner|first1=Harold|last2=George Yin|first2=G.|date=2003-07-17}}</ref> by defining interpolated time <math display="inline">t_n=\sum_{i=0}^{n-1}a_i</math>, interpolated process <math display="inline">\theta^n(\cdot)</math> and interpolated normalized process <math display="inline">U^n(\cdot)</math> as
 
<math display="block">\theta^n(t)=\theta_{n+i},\quad U^n(t)=(\theta_{n+i}-\theta^*)/\sqrt{a_{n+i}}\quad\mbox{for}\quad t\in[t_{n+i}-t_n,t_{n+i+1}-t_n),i\ge0</math>Let the iterate average be <math>\Theta_n=\frac{a_n}{t}\sum_{i=n}^{n+t/a_n-1}\theta_i</math> and the associate normalized error to be <math>\hat{U}^n(t)=\frac{\sqrt{a_n}}{t}\sum_{i=n}^{n+t/a_n-1}(\theta_i-\theta^*)</math>.
 
With assumption '''A1)''' and the following '''A2)'''
 
'''''A2)''''' ''There is a Hurwitz matrix <math display="inline">A</math> and a symmetric and positive-definite matrix <math display="inline">\Sigma</math> such that <math display="inline">\{U^n(\cdot)\}</math> converges weakly to <math display="inline">U(\cdot)</math>, where <math display="inline">U(\cdot)</math> is the stationary solution to <math display="block">dU = AU \, dt +\Sigma^{1/2} \, dw</math>where <math display="inline">w(\cdot)</math> is a standard Wiener process.''
 
satisfied, define <math display="inline">\bar{V}=(A^{-1})'\Sigma(A')^{-1}</math>. Then for each <math display="inline">t</math>,

<math display="block">\hat{U}^n(t)\stackrel{\mathcal{D}}{\longrightarrow}\mathcal{N}(0,V_t),\quad \text{where}\quad V_t=\bar{V}/t+O(1/t^2).</math>

The success of the averaging idea is due to the time-scale separation of the original sequence <math display="inline">\{\theta_n\}</math> and the averaged sequence <math display="inline">\{\Theta_n\}</math>, with the time scale of the former being faster.
 
=== Application in stochastic optimization ===
Suppose we want to solve the following stochastic optimization problem<math display="block">g(\theta^*) = \min_{\theta\in\Theta}\operatorname{E}[Q(\theta,X)],</math>where <math display="inline">g(\theta) = \operatorname{E}[Q(\theta,X)]</math> is differentiable and convex. This problem is then equivalent to finding the root <math>\theta^*</math> of <math>\nabla g(\theta) = 0</math>. Here <math>Q(\theta,X)</math> can be interpreted as some "observed" cost as a function of the chosen <math>\theta</math> and random effects <math>X</math>. In practice, it might be hard to get an analytical form of <math>\nabla g(\theta)</math>, but the Robbins–Monro method can generate a sequence <math>(\theta_n)_{n\geq 0}</math> to approximate <math>\theta^*</math> if one can generate <math>(X_n)_{n\geq 0}</math>, in which the conditional expectation of <math>H(\theta_n, X_n)</math> given <math>\theta_n</math> is exactly <math>\nabla g(\theta_n)</math>, i.e. <math>X_n</math> is simulated from a conditional distribution satisfying

<math display="block">\operatorname{E}[H(\theta,X)|\theta = \theta_n] = \nabla g(\theta_n).</math>
 
Here <math>H(\theta, X)</math> is an unbiased estimator of <math>\nabla g(\theta)</math>. If <math>X</math> depends on <math>\theta</math>, there is in general no natural way of generating a random outcome <math>H(\theta, X)</math> that is an unbiased estimator of the gradient. In some special cases, when either IPA or likelihood-ratio methods are applicable, one is able to obtain an unbiased gradient estimator <math>H(\theta, X)</math>. If <math>X</math> is viewed as some "fundamental" underlying random process that is generated ''independently'' of <math>\theta</math>, and under some regularity conditions for derivative-integral interchange operations so that <math>\operatorname{E}\Big[\frac{\partial}{\partial\theta}Q(\theta,X)\Big] = \nabla g(\theta)</math>, then <math>H(\theta, X) = \frac{\partial}{\partial \theta}Q(\theta, X)</math> gives an unbiased gradient estimate. However, for some applications we have to use finite-difference methods in which <math>H(\theta, X)</math> has a conditional expectation close to <math>\nabla g(\theta)</math> but not exactly equal to it.
 
We then define a recursion analogous to [[Newton's method]] in the deterministic algorithm:

<math display="block">\theta_{n+1} = \theta_n - \varepsilon_n H(\theta_n,X_{n+1}).</math>
 
==== Convergence of the algorithm ====
The following result gives sufficient conditions on <math>\theta_n</math> for the algorithm to converge:<ref>{{Cite book|title=Numerical Methods for Stochastic Processes|last1=Bouleau|first1=N.|author-link=Nicolas Bouleau|last2=Lepingle|first2=D.|publisher=John Wiley|year=1994|isbn=9780471546412|___location=New York|url=https://books.google.com/books?id=9MLL2RN40asC}}</ref>
 
C1) <math>\varepsilon_n \geq 0, \forall\; n\geq 0. </math>
 
C2) <math>\sum_{n=0}^\infty \varepsilon_n = \infty </math>
 
C3) <math>\sum_{n=0}^{\infty}\varepsilon_n^2 <\infty </math>
 
C4) <math>|X_n| \leq B, \text{ for a fixed bound } B. </math>
 
C5) <math>g(\theta)</math> is strictly convex, i.e.
 
: <math display="block">\inf_{\delta\leq |\theta - \theta^*|\leq 1/\delta}\langle\theta-\theta^*, \nabla g(\theta)\rangle > 0,\text{ for every } 0< \delta < 1.
</math>
 
Then <math>\theta_n </math> converges to <math>\theta^* </math> almost surely.
 
Here are some intuitive explanations about these conditions. Suppose <math>H(\theta_n, X_{n+1})</math> are uniformly bounded random variables. If C2) is not satisfied, i.e. <math>\sum_{n=0}^\infty \varepsilon_n < \infty</math>, then<math display="block">\theta_n - \theta_0 = -\sum_{i=0}^{n-1} \varepsilon_i H(\theta_i, X_{i+1})</math>is a bounded sequence, so the iteration cannot converge to <math>\theta^* </math> if the initial guess <math>\theta_0 </math> is too far away from <math>\theta^* </math>. As for C3), note that if <math>\theta_n </math> converges to <math>\theta^* </math> then

<math display="block">\theta_{n+1} - \theta_n = -\varepsilon_n H(\theta_n, X_{n+1}) \rightarrow 0, \text{ as } n\rightarrow \infty,</math>

so we must have <math>\varepsilon_n \downarrow 0 </math>, and condition C3) ensures it. A natural choice would be <math>\varepsilon_n = 1/n </math>. Condition C5) is a fairly stringent condition on the shape of <math>g(\theta)</math>; it gives the search direction of the algorithm.
 
==== Example (where the stochastic gradient method is appropriate)<ref name="jcsbook" /> ====
Suppose <math>Q(\theta, X) = f(\theta) + \theta^T X</math>, where <math>f</math> is differentiable and <math>X\in \mathbb{R}^p</math> is a random variable independent of <math>\theta</math>. Then <math>g(\theta)=\operatorname{E}[Q(\theta,X)] = f(\theta)+\theta^T\operatorname{E}X</math> depends on the mean of <math>X</math>, and the stochastic gradient method would be appropriate in this problem. We can choose <math>H(\theta, X) = \frac{\partial}{\partial\theta}Q(\theta,X) = \frac{\partial}{\partial\theta}f(\theta) + X.</math>
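
A minimal sketch of this example with the illustrative choice <math display="inline">f(\theta)=\|\theta\|^2</math>, so that <math display="inline">\theta^* = -\operatorname{E}X/2</math> (the dimension, distribution of <math display="inline">X</math>, and step count are assumptions of the sketch; the Gaussian <math display="inline">X</math> satisfies C4) only approximately):

<syntaxhighlight lang="python">
import numpy as np

# Sketch of the stochastic gradient method for Q(theta, X) = f(theta) + theta^T X
# with the illustrative choice f(theta) = ||theta||^2, so that
# g(theta) = ||theta||^2 + theta^T E[X] is minimized at theta* = -E[X]/2.
rng = np.random.default_rng(2)
mean_x = np.array([1.0, -2.0, 0.5])        # E[X], unknown to the algorithm
theta = np.zeros(3)

for n in range(1, 100_000 + 1):
    x = mean_x + rng.normal(size=3)        # draw X_{n+1} independently of theta
    h = 2.0 * theta + x                    # H(theta, X) = grad f(theta) + X
    theta = theta - (1.0 / n) * h          # theta_{n+1} = theta_n - eps_n H(theta_n, X_{n+1})

print(theta, -mean_x / 2.0)                # theta_n approaches theta* = -E[X]/2
</syntaxhighlight>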
 
==Kiefer–Wolfowitz algorithm==
The Kiefer–Wolfowitz algorithm was introduced in 1952 by [[Jacob Wolfowitz]] and [[Jack_Kiefer_(statistician)|Jack Kiefer]],<ref name = "KW">{{Cite journal | last1 = Kiefer | first1 = J. | last2 = Wolfowitz | first2 = J. | doi = 10.1214/aoms/1177729392 | title = Stochastic Estimation of the Maximum of a Regression Function | journal = The Annals of Mathematical Statistics | volume = 23 | issue = 3 | pages = 462 | year = 1952 | doi-access = free }}</ref> and was motivated by the publication of the Robbins–Monro algorithm. However, the algorithm was presented as a method which would stochastically estimate the maximum of a function.
 
Let <math>M(x) </math> be a function which has a maximum at the point <math>\theta </math>. It is assumed that <math>M(x)</math> is unknown; however, certain observations <math>N(x)</math>, where <math>\operatorname E[N(x)] = M(x)</math>, can be made at any point <math>x</math>. The structure of the algorithm follows a gradient-like method, with the iterates being generated as
::<math> x_{n+1} = x_n + a_n\cdot\left(\frac{N(x_n + c_n) - N(x_n -c_n)}{2 c_n} \right) </math>
where <math>N(x_n+c_n)</math> and <math>N(x_n-c_n)</math> are independent. At every step, the gradient of <math>M(x)</math> is approximated akin to a [[Finite difference#Basic types|central difference method]] with <math>h=2c_n</math>. So the sequence <math>\{c_n\}</math> specifies the sequence of finite difference widths used for the gradient approximation, while the sequence <math>\{a_n\}</math> specifies a sequence of positive step sizes taken along that direction.
 
Kiefer and Wolfowitz proved that, if <math>M(x)</math> satisfies certain regularity conditions, then <math>x_n</math> will converge to <math>\theta</math> in probability as <math>n\to\infty</math>, and later Blum<ref name=":0" /> in 1954 showed that <math>x_n</math> converges to <math>\theta</math> almost surely, provided that:
* <math>\operatorname{Var}(N(x))\le S<\infty</math> for all <math>x</math>.
* The function <math>M(x)</math> has a unique point of maximum (minimum) and is strongly concave (convex).
** The algorithm was first presented with the requirement that the function <math>M(\cdot)</math> maintains strong global convexity (concavity) over the entire feasible space. Since this condition is too restrictive to impose over the entire ___domain, Kiefer and Wolfowitz proposed that it is sufficient to impose the condition over a compact set <math>C_0 \subset \mathbb R^d</math> which is known to include the optimal solution.
*The function <math>M(x)</math> satisfies the regularity conditions as follows:
** There exists <math>\beta>0</math> and <math>B>0</math> such that <math display="block">|x'-\theta|+|x''-\theta|<\beta \quad \Longrightarrow \quad |M(x')-M(x'')|<B|x'-x''|</math>
** There exists <math> \rho>0 </math> and <math> R>0 </math> such that <math display="block">|x'-x''|<\rho \quad \Longrightarrow \quad |M(x')-M(x'')|<R</math>
** For every <math> \delta>0 </math>, there exists some <math> \pi(\delta)>0 </math> such that <math display="block">|z-\theta|>\delta \quad \Longrightarrow \quad \inf_{\delta/2>\varepsilon>0}\frac{|M(z+\varepsilon)-M(z-\varepsilon)|}{\varepsilon}>\pi(\delta)</math>
*The selected sequences <math>\{a_n\}</math> and <math>\{c_n\}</math> must be infinite sequences of positive numbers such that
**<math>\quad c_n \rightarrow 0\quad \text{as}\quad n\to\infty </math>
**<math> \sum^\infty_{n=0} a_n =\infty </math>
**<math> \sum^\infty_{n=0} a_nc_n <\infty </math>
**<math> \sum^\infty_{n=0} a_n^2c_n^{-2} <\infty </math>
 
A suitable choice of sequences, as recommended by Kiefer and Wolfowitz, would be <math>a_n = 1/n</math> and <math>c_n = n^{-1/3}</math>.
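
For illustration, a minimal Python sketch of the iteration with these sequences, on an assumed objective <math display="inline">M(x) = -(x-1)^2</math> with Gaussian observation noise (the objective, noise level, and step count are assumptions of the sketch):

<syntaxhighlight lang="python">
import numpy as np

# Sketch of the Kiefer–Wolfowitz iteration with a_n = 1/n and c_n = n^(-1/3).
# The objective M(x) = -(x - 1)^2 (maximum at theta = 1) and the noise model
# are illustrative assumptions; each call to N draws independent noise.
rng = np.random.default_rng(3)

def N(x):                          # noisy measurement, E[N(x)] = M(x)
    return -(x - 1.0) ** 2 + rng.normal(scale=0.1)

x = 0.0
for n in range(1, 10_000 + 1):
    a_n = 1.0 / n
    c_n = n ** (-1.0 / 3.0)
    grad_est = (N(x + c_n) - N(x - c_n)) / (2.0 * c_n)   # central-difference estimate
    x = x + a_n * grad_est         # step towards the maximizer

print(x)                           # close to theta = 1
</syntaxhighlight>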
 
===Subsequent developments and important issues===
#The Kiefer–Wolfowitz algorithm requires that, for each gradient computation, at least <math>d+1</math> different parameter values be simulated, where <math>d </math> is the dimension of the search space. Consequently, when <math>d</math> is large, the Kiefer–Wolfowitz algorithm requires substantial computational effort per iteration, leading to slow convergence.
## To address this problem, Spall proposed the use of [[Simultaneous perturbation stochastic approximation|simultaneous perturbations]] to estimate the gradient. This method requires only two simulations per iteration, regardless of the dimension <math>d</math>; a minimal sketch follows after this list.<ref name = "Jsp">{{Cite journal | last1 = Spall | first1 = J. C. | title = Adaptive stochastic approximation by the simultaneous perturbation method | doi = 10.1109/TAC.2000.880982 | journal = IEEE Transactions on Automatic Control | volume = 45 | issue = 10 | pages = 1839–1853 | year = 2000 }}</ref>
#Among the conditions required for convergence, it can be difficult to specify in advance a compact set that fulfills strong convexity (or concavity) and contains the unique solution. In real-world applications, if the ___domain is quite large, these assumptions can be fairly restrictive and unrealistic.
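
A minimal sketch of the simultaneous-perturbation idea from point 1 above, on an assumed noisy quadratic objective (the gain sequences, dimension, and noise level are illustrative choices rather than recommended tunings):

<syntaxhighlight lang="python">
import numpy as np

# Sketch of a simultaneous-perturbation (SPSA-style) gradient estimate: two
# noisy measurements per iteration, regardless of the dimension d. The
# objective y(theta) = ||theta||^2 + noise and all gains are assumptions.
rng = np.random.default_rng(4)
d = 10
theta = np.ones(d)

def y(t):                               # noisy loss measurement, minimum at 0
    return float(t @ t) + rng.normal(scale=0.1)

for k in range(1, 20_000 + 1):
    a_k = 0.5 / (k + 10)                # gain sequence (offset keeps early steps stable)
    c_k = 0.1 * k ** (-1.0 / 6.0)       # perturbation size
    delta = rng.choice([-1.0, 1.0], size=d)       # random Rademacher directions
    g_hat = (y(theta + c_k * delta) - y(theta - c_k * delta)) / (2.0 * c_k * delta)
    theta = theta - a_k * g_hat         # descend using the two-measurement estimate

print(np.linalg.norm(theta))            # close to 0, the minimizer
</syntaxhighlight>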
 
==Further developments==
An extensive theoretical literature has grown up around these algorithms, concerning conditions for convergence, rates of convergence, multivariate and other generalizations, proper choice of step size, possible noise models, and so on.<ref name="kushneryin">{{Cite book | last1 = Kushner | first1 = H. J. | author-link=Harold J. Kushner| last2 = Yin | first2 = G. G. | doi = 10.1007/978-1-4899-2696-8 | title = Stochastic Approximation Algorithms and Applications | year = 1997 | isbn = 978-1-4899-2698-2 }}</ref><ref>''Stochastic Approximation and Recursive Estimation'', Mikhail Borisovich Nevel'son and Rafail Zalmanovich Has'minskiĭ, translated by Israel Program for Scientific Translations and B. Silver, Providence, RI: American Mathematical Society, 1973, 1976. {{isbn|0-8218-1597-0}}.</ref> These methods are also applied in [[control theory]], in which case the unknown function which we wish to optimize or find the zero of may vary in time. In this case, the step size <math>a_n</math> should not converge to zero but should be chosen so as to track the function.<ref name="kushneryin"/><sup>, 2nd ed., chapter 3</sup>
 
[[C. Johan Masreliez]] and [[R. Douglas Martin]] were the first to apply
stochastic approximation to [[Robust statistics|robust]] [[estimation]].<ref>{{Cite journal | last1 = Martin | first1 = R. | last2 = Masreliez | first2 = C. | doi = 10.1109/TIT.1975.1055386 | title = Robust estimation via stochastic approximation | journal = IEEE Transactions on Information Theory | volume = 21 | issue = 3 | pages = 263 | year = 1975 }}</ref>
 
The main tool for analyzing stochastic approximation algorithms (including the Robbins–Monro and the Kiefer–Wolfowitz algorithms) is a theorem by [[Aryeh Dvoretzky]] published in 1956.<ref>{{cite conference
| last = Dvoretzky | first = Aryeh | author-link = Aryeh Dvoretzky
| editor-last = Neyman | editor-first = Jerzy | editor-link = Jerzy Neyman
| contribution = On stochastic approximation
| contribution-url = https://projecteuclid.org/euclid.bsmsp/1200501645
| mr = 84911
| pages = 39–55
| publisher = University of California Press
| title = Proceedings of the Third Berkeley Symposium on Mathematical Statistics and Probability, 1954–1955, vol. I
| year = 1956}}</ref>
 
==See also==
*[[Stochastic gradient descent]]
*[[Stochastic optimization]]
*[[Stochastic variance reduction]]
*[[Simultaneous perturbation stochastic approximation]]
 
==References==
{{reflist}}
 
{{Statistics|collection}}
 
{{DEFAULTSORT:Stochastic Approximation}}
[[Category:Stochastic optimization]]
[[Category:Statistical approximations]]