Subgradient method

Let <math>f:\mathbb{R}^n \to \mathbb{R}</math> be a [[convex function]] with domain <math>\mathbb{R}^n</math>. A classical subgradient method iterates
:<math>x^{(k+1)} = x^{(k)} - \alpha_k g^{(k)} \ </math>
where <math>g^{(k)}</math> denotes a [[subgradient]] of <math> f \ </math> at <math>x^{(k)} \ </math>, and <math>x^{(k)}</math> is the <math>k^{th}</math> iterate of <math>x</math>. If <math>f \ </math> is differentiable, then its only subgradient is the gradient vector <math>\nabla f</math> itself.
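
As a minimal illustrative sketch (the objective, starting point, and constant step size below are assumptions chosen for the example, not part of the method itself), the update can be carried out for the non-differentiable convex function <math>f(x) = \|x\|_1</math>, for which <math>\operatorname{sign}(x)</math> is a valid subgradient at every <math>x</math>:

<syntaxhighlight lang="python">
import numpy as np

def f(x):
    # Example convex, non-differentiable objective: the l1 norm, f(x) = sum_i |x_i|.
    return np.sum(np.abs(x))

def subgradient(x):
    # sign(x) is a valid subgradient of the l1 norm at x; at coordinates where
    # x_i = 0 any value in [-1, 1] is admissible, and np.sign returns 0 there.
    return np.sign(x)

x_k = np.array([1.5, -2.0, 0.5])   # current iterate x^(k) (arbitrary example values)
alpha_k = 0.1                      # constant step size (one possible choice)
g_k = subgradient(x_k)             # a subgradient g^(k) of f at x^(k)
x_next = x_k - alpha_k * g_k       # the update x^(k+1) = x^(k) - alpha_k * g^(k)
</syntaxhighlight>
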
It may happen that <math>-g^{(k)}</math> is not a descent direction for <math>f \ </math> at <math>x^{(k)}</math>. We therefore maintain a running record <math>f_{\rm{best}} \ </math> of the lowest objective function value found so far, i.e.
:<math>f_{\rm{best}}^{(k)} = \min\{f_{\rm{best}}^{(k-1)} , f(x^{(k)}) \}.</math>
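
Continuing the same illustrative <math>\ell_1</math>-norm sketch (the starting point and the diminishing step size <math>\alpha_k = 1/(k+1)</math> are again assumed choices, not requirements), the running record <math>f_{\rm{best}} \ </math> is updated alongside the iteration:

<syntaxhighlight lang="python">
import numpy as np

def f(x):
    return np.sum(np.abs(x))      # convex objective: the l1 norm

def subgradient(x):
    return np.sign(x)             # a valid subgradient of the l1 norm at x

x = np.array([1.5, -2.0, 0.5])    # starting point x^(0) (arbitrary example values)
f_best = f(x)                     # f_best^(0) = f(x^(0))

for k in range(100):
    g = subgradient(x)            # a subgradient g^(k) at x^(k)
    alpha = 1.0 / (k + 1)         # diminishing step size alpha_k (one common rule)
    x = x - alpha * g             # x^(k+1) = x^(k) - alpha_k * g^(k)
    f_best = min(f_best, f(x))    # f_best^(k+1) = min(f_best^(k), f(x^(k+1)))
</syntaxhighlight>

Since the objective value need not decrease at every iteration, <math>f_{\rm{best}}^{(k)}</math>, rather than <math>f(x^{(k)})</math>, serves as the current estimate of the minimum.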