Stochastic approximation: Difference between revisions

Chung (1954)<ref>{{Cite journal|last=Chung|first=K. L.|date=1954-09-01|title=On a Stochastic Approximation Method|journal=The Annals of Mathematical Statistics|language=EN|volume=25|issue=3|pages=463–483|doi=10.1214/aoms/1177728716|issn=0003-4851|doi-access=free}}</ref> and Fabian (1968)<ref>{{Cite journal|last=Fabian|first=Vaclav|date=1968-08-01|title=On Asymptotic Normality in Stochastic Approximation|journal=The Annals of Mathematical Statistics|language=EN|volume=39|issue=4|pages=1327–1332|doi=10.1214/aoms/1177698258|issn=0003-4851|doi-access=free}}</ref> showed that the optimal convergence rate <math display="inline">O(1/\sqrt{n})</math> is achieved with the gain sequence <math display="inline">a_n=\nabla^2 f(\theta^*)^{-1}/n</math> (or <math display="inline">a_n=\frac{1}{nM'(\theta^*)}</math> in the scalar case). Lai and Robbins<ref>{{Cite journal|last1=Lai|first1=T. L.|last2=Robbins|first2=Herbert|date=1979-11-01|title=Adaptive Design and Stochastic Approximation|journal=The Annals of Statistics|language=EN|volume=7|issue=6|pages=1196–1221|doi=10.1214/aos/1176344840|issn=0090-5364|doi-access=free}}</ref><ref>{{Cite journal|last1=Lai|first1=Tze Leung|last2=Robbins|first2=Herbert|date=1981-09-01|title=Consistency and asymptotic efficiency of slope estimates in stochastic approximation schemes|journal=Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete|language=en|volume=56|issue=3|pages=329–360|doi=10.1007/BF00536178|s2cid=122109044|issn=0044-3719|doi-access=free}}</ref> designed adaptive procedures to estimate <math display="inline">M'(\theta^*)</math> such that <math display="inline">\theta_n</math> has minimal asymptotic variance. However, applying such optimal methods requires a great deal of a priori information that is hard to obtain in most situations. To overcome this shortfall, Polyak (1991)<ref>{{Cite journal|last=Polyak|first=B T|date=1991|title=New stochastic approximation type procedures. 
(In Russian.)|url=https://www.researchgate.net/publication/236736759|journal=Automation and Remote Control|volume=7|issue=7}}</ref> and Ruppert (1988)<ref>{{Cite report|last=Ruppert|first=David|title=Efficient estimators from a slowly converging Robbins–Monro process|url=https://www.researchgate.net/publication/242608650|type=Technical Report 781|publisher=Cornell University School of Operations Research and Industrial Engineering|year=1988}}</ref> independently developed a new optimal algorithm based on the idea of averaging the trajectories. Polyak and Juditsky<ref name="pj">{{Cite journal | last1 = Polyak | first1 = B. T. | last2 = Juditsky | first2 = A. B. | doi = 10.1137/0330046 | title = Acceleration of Stochastic Approximation by Averaging | journal = SIAM Journal on Control and Optimization | volume = 30 | issue = 4 | pages = 838 | year = 1992 }}</ref> also presented a method of accelerating Robbins–Monro for linear and non-linear root-searching problems through the use of longer steps and averaging of the iterates. The algorithm has the following structure:<math display="block"> \theta_{n+1} - \theta_n = a_n(\alpha - N(\theta_n)), \qquad \bar{\theta}_n = \frac{1}{n} \sum^{n-1}_{i=0} \theta_i </math>The convergence of <math> \bar{\theta}_n </math> to the unique root <math>\theta^*</math> relies on the condition that the step sequence <math>\{a_n\}</math> decreases sufficiently slowly. That is
 
'''''A1)''''' <math display="block"> a_n \rightarrow 0, \qquad \frac{a_n - a_{n+1}}{a_n} = o(a_n)</math>
 
Therefore, the sequence <math display="inline">a_n = n^{-\alpha}</math> with <math display="inline">0 < \alpha < 1</math> satisfies this restriction, but <math display="inline">\alpha = 1</math> does not, hence the longer steps. Under the assumptions outlined in the Robbins–Monro algorithm, this modification attains the same asymptotically optimal convergence rate <math display="inline">O(1/\sqrt{n})</math>, yet with a more robust step size policy.<ref name="pj" /> Prior to this, the idea of using longer steps and averaging the iterates had already been proposed by Nemirovski and Yudin<ref name="NY">On Cezari's convergence of the steepest descent method for approximating saddle points of convex-concave functions, A. Nemirovski and D. Yudin, ''Dokl. Akad. Nauk SSSR'' '''2939''', (1978 (Russian)), Soviet Math. Dokl. '''19''' (1978 (English)).</ref> for solving stochastic optimization problems with continuous convex objectives and for convex-concave saddle point problems. These algorithms were observed to attain the nonasymptotic rate <math display="inline">O(1/\sqrt{n})</math>.
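The averaging scheme above can be sketched numerically. The following is a minimal illustration, not the authors' implementation: the regression function <math display="inline">M</math>, the target level, the noise scale, the step exponent, and all variable names are assumptions chosen for the example. It runs the Robbins–Monro recursion with the slowly decreasing steps <math display="inline">a_n = n^{-\alpha}</math>, <math display="inline">0 < \alpha < 1</math>, while accumulating the running average of the iterates.

```python
import numpy as np

# Illustrative sketch of Polyak-Ruppert averaging (all concrete choices
# below -- M, the target level, noise scale, step exponent -- are
# assumptions for demonstration, not taken from the article).

rng = np.random.default_rng(0)

alpha_target = 0.0                  # level alpha at which to solve M(theta) = alpha
M = lambda theta: np.tanh(theta)    # increasing regression function, root theta* = 0

def robbins_monro_averaged(n_iter=50_000, step_exponent=0.7, noise=1.0):
    """Robbins-Monro with steps a_n = n^{-step_exponent} (longer than 1/n)
    plus Polyak-Ruppert averaging; returns (last iterate, averaged iterate)."""
    theta = 2.0                     # arbitrary starting point
    running_sum = 0.0
    for n in range(1, n_iter + 1):
        a_n = n ** (-step_exponent)                  # slowly decreasing step
        N_obs = M(theta) + noise * rng.normal()      # noisy observation N(theta_n)
        theta += a_n * (alpha_target - N_obs)        # theta_{n+1} = theta_n + a_n (alpha - N)
        running_sum += theta                         # accumulate for the average
    return theta, running_sum / n_iter

last, averaged = robbins_monro_averaged()
print(abs(last), abs(averaged))
```

With these choices the averaged iterate concentrates around the root at the <math display="inline">O(1/\sqrt{n})</math> rate even though the individual steps decrease much more slowly than <math display="inline">1/n</math>, which is the point of the longer-step policy.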
With assumption '''A1)''' and the following '''A2)'''
 
'''''A2)''''' ''There is a Hurwitz matrix <math display="inline">A</math> and a symmetric and positive-definite matrix <math display="inline">\Sigma</math> such that <math display="inline">\{U^n(\cdot)\}</math> converges weakly to <math display="inline">U(\cdot)</math>, where <math display="inline">U(\cdot)</math> is the stationary solution to'' <math display="block">dU = AU \, dt + \Sigma^{1/2} \, dw</math>''where <math display="inline">w(\cdot)</math> is a standard Wiener process.''
 
satisfied, define ''<math display="inline">\bar{V}=(A^{-1})'\Sigma(A')^{-1}</math>''. Then for each ''<math display="inline">t</math>'',