Stochastic approximation: Difference between revisions

Suppose we want to solve the following stochastic optimization problem
 
<math display="block">g(\theta^*) = \min_{\theta\in\Theta}\operatorname{E}[Q(\theta,X)],</math>where <math display="inline">g(\theta) = \operatorname{E}[Q(\theta,X)]</math> is differentiable and convex. This problem is then equivalent to finding the root <math>\theta^*</math> of <math>\nabla g(\theta) = 0</math>. Here <math>Q(\theta,X)</math> can be interpreted as some "observed" cost as a function of the chosen <math>\theta</math> and random effects <math>X</math>. In practice, it might be hard to obtain an analytical form of <math>\nabla g(\theta)</math>, but the Robbins–Monro method manages to generate a sequence <math>(\theta_n)_{n\geq 0}</math> approximating <math>\theta^*</math> if one can generate <math>(X_n)_{n\geq 0}</math>, in which the conditional expectation of <math>X_n</math> given <math>\theta_n</math> is exactly <math>\nabla g(\theta_n)</math>.
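A minimal sketch of this idea in Python (the objective <math>Q(\theta,X) = (\theta - X)^2</math> with <math>X \sim N(2,1)</math> is an illustrative choice, not part of the article): each iteration replaces the unknown gradient <math>\nabla g(\theta_n)</math> with the unbiased sample <math>2(\theta_n - X_n)</math> and takes a step of size <math>a_n = 1/n</math>, which satisfies the usual Robbins–Monro conditions <math>\sum_n a_n = \infty</math> and <math>\sum_n a_n^2 < \infty</math>.

```python
import random

def robbins_monro(grad_sample, theta0, n_steps=100000, seed=0):
    """Robbins-Monro iteration: theta_{n+1} = theta_n - a_n * X_n,
    where X_n = grad_sample(theta_n) is an unbiased sample of the
    gradient of g at theta_n, and a_n = 1/n."""
    rng = random.Random(seed)
    theta = theta0
    for n in range(1, n_steps + 1):
        theta -= (1.0 / n) * grad_sample(theta, rng)
    return theta

# Illustrative example: Q(theta, X) = (theta - X)^2 with X ~ N(2, 1),
# so g(theta) = E[Q(theta, X)] is minimized at theta* = E[X] = 2.
# An unbiased sample of the gradient of g at theta is 2*(theta - X_n).
theta_star = robbins_monro(lambda t, rng: 2.0 * (t - rng.gauss(2.0, 1.0)),
                           theta0=0.0)
print(theta_star)  # close to 2.0
```

With decreasing steps <math>a_n = 1/n</math>, the gradient noise is averaged out and the iterates converge to the minimizer despite each individual gradient sample being noisy.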