For constant step size and constant step length, the subgradient algorithm is guaranteed to converge to within some range of the optimal value, i.e.
:<math>\lim_{k\to\infty} f_{\text{best}}^{(k)} - f^* < \epsilon</math>
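The behavior described above can be illustrated numerically. The following is a minimal sketch (not part of the original article) on the hypothetical example <math>f(x) = |x - 1|</math>, whose optimal value is <math>f^* = 0</math>; with a constant step size, the best objective value found stays within a small band of the optimum rather than converging exactly:

```python
# Subgradient method with constant step size on f(x) = |x - 1|.
# Illustrative instance: f* = 0 at x* = 1.
def subgradient(x):
    # Any subgradient of |x - 1|: the sign of (x - 1).
    if x > 1:
        return 1.0
    if x < 1:
        return -1.0
    return 0.0

alpha = 0.01          # constant step size
x = 5.0               # starting point
f_best = abs(x - 1)   # best objective value seen so far
for k in range(10000):
    x = x - alpha * subgradient(x)
    f_best = min(f_best, abs(x - 1))
# f_best ends up within roughly alpha/2 of f* = 0
```

Because <math>f</math> is nondifferentiable at the optimum, the iterates eventually oscillate around <math>x^* = 1</math> with amplitude on the order of the step size, which is why only convergence to within a range of <math>f^*</math> is guaranteed.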
==Constrained optimization==
===Projected subgradient===
One extension of the subgradient method is the '''projected subgradient method''', which solves the constrained optimization problem
:minimize <math>f(x) \ </math> subject to
:<math>x\in\mathcal{C}</math>
where <math>\mathcal{C}</math> is a convex set. The projected subgradient method uses the iteration
:<math>x^{(k+1)} = P \left(x^{(k)} - \alpha_k g^{(k)} \right) </math>
where <math>P</math> is Euclidean projection onto <math>\mathcal{C}</math> and <math>g^{(k)}</math> is any subgradient of <math>f \ </math> at <math>x^{(k)}</math>.
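The iteration can be sketched in a few lines of code. The following hypothetical one-dimensional instance (not part of the original article) minimizes <math>|x|</math> over the interval <math>\mathcal{C} = [2, 5]</math>, where projection is simple clipping and the constrained minimizer is <math>x^* = 2</math>:

```python
# Projected subgradient sketch: minimize |x| subject to x in C = [2, 5].
def P(x):
    # Euclidean projection onto the interval [2, 5] (clipping).
    return min(max(x, 2.0), 5.0)

def g(x):
    # A subgradient of |x|.
    return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

x = 4.0               # feasible starting point
for k in range(1, 2000):
    alpha = 1.0 / k   # diminishing step size
    x = P(x - alpha * g(x))
# x approaches the constrained minimizer x* = 2
```

Each iteration takes an ordinary subgradient step and then projects the result back onto the feasible set, so every iterate remains feasible.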
===General constraints===
The subgradient method can be extended to solve the inequality constrained problem
:minimize <math>f_0(x) \ </math> subject to
:<math>f_i (x) \leq 0,\quad i = 1,\dots,m</math>
where <math>f_i</math> are convex. The algorithm takes the same form as the unconstrained case
:<math>x^{(k+1)} = x^{(k)} - \alpha_k g^{(k)} \ </math>
where <math>\alpha_k>0</math> is a step size, and <math>g^{(k)}</math> is a subgradient of the objective or of one of the constraint functions at <math>x^{(k)}</math>. Take
:<math>g^{(k)} \in
\begin{cases}
\partial f_0 (x^{(k)}) & \text{if } f_i(x^{(k)}) \leq 0 \text{ for all } i = 1,\dots,m \\
\partial f_j (x^{(k)}) & \text{for some } j \text{ with } f_j(x^{(k)}) > 0
\end{cases}</math>
where <math>\partial f</math> denotes the subdifferential of <math>f \ </math>. If the current point is feasible, the algorithm uses an objective subgradient; if the current point is infeasible, the algorithm chooses a subgradient of any violated constraint.
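This feasibility-driven choice of subgradient can be sketched on a hypothetical instance (not part of the original article): minimize <math>f_0(x) = x</math> subject to <math>f_1(x) = x^2 - 1 \leq 0</math>, so the constrained optimum is <math>x^* = -1</math>:

```python
# Constrained subgradient sketch: minimize f0(x) = x
# subject to f1(x) = x**2 - 1 <= 0  (optimum x* = -1, f* = -1).
x = 3.0               # infeasible starting point
for k in range(1, 5000):
    alpha = 1.0 / k   # diminishing step size
    if x * x - 1 <= 0:
        g = 1.0       # feasible: use a subgradient of the objective f0
    else:
        g = 2.0 * x   # infeasible: use the gradient of the violated constraint f1
    x = x - alpha * g
# x approaches the constrained optimum x* = -1
```

When the iterate is infeasible, the step decreases the violated constraint function, pushing the iterate back toward the feasible set; once feasible, the step decreases the objective.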
[[Category: Mathematics_stubs]]