'''Stochastic approximation''' methods are a family of iterative [[stochastic optimization]] [[algorithm]]s that attempt to find zeroes or extrema of functions which cannot be computed directly, but only estimated via noisy observations. Mathematically, this refers to solving:
:<math> \min_{x \in \Theta}\; f(x) = \mathbb E[F(x,\xi)] </math>
where the objective is to find the parameter <math>x \in \Theta</math> that minimizes the above function for some unknown random variable <math>\xi</math>. It is assumed that, while the ___domain <math>\Theta \subset \mathbb R^d</math> is known (where <math>d</math> is the dimension of the parameter <math>x</math>), the function <math>f(x)</math> cannot be computed exactly, but can only be approximated via simulation.
 
The first, and prototypical, algorithms of this kind were the '''Robbins-Monro''' and '''Kiefer-Wolfowitz''' algorithms.
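
A minimal sketch of this kind of iteration, assuming a toy objective <math>f(x) = \mathbb E[(x-\xi)^2]</math> with <math>\xi \sim N(3,1)</math> (this concrete example and the Python code below are illustrative, not part of any specific published algorithm): each step moves the current estimate against a noisy gradient observation, using decreasing step sizes <math>a_n = 1/n</math>.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def noisy_gradient(x):
    # Noisy, unbiased observation of the gradient of f(x) = E[(x - xi)^2]
    # with xi ~ N(3, 1); the exact gradient 2*(x - 3) is never computed.
    xi = rng.normal(loc=3.0, scale=1.0)
    return 2.0 * (x - xi)

x = 0.0
for n in range(1, 5001):
    a_n = 1.0 / n  # step sizes shrink: sum a_n diverges, sum a_n^2 converges
    x = x - a_n * noisy_gradient(x)

print(x)  # close to the true minimizer x = 3
</syntaxhighlight>

The shrinking step sizes average out the observation noise while still allowing the iterates to move all the way to the minimizer.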