{{Short description|Concept in convex optimization mathematics}}
'''Subgradient methods''' are [[convex optimization]] methods which use [[Subderivative|subderivatives]]. Originally developed by [[Naum Z. Shor]] and others in the 1960s and 1970s, subgradient methods are convergent even when applied to a non-differentiable objective function. When the objective function is differentiable, subgradient methods for unconstrained problems use the same search direction as the method of [[gradient descent|steepest descent]].
Subgradient methods are slower than Newton's method when applied to minimize twice continuously differentiable convex functions. However, Newton's method fails to converge on problems that have non-differentiable kinks.
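A minimal sketch of the basic iteration <math>x^{(k+1)} = x^{(k)} - \alpha_k g^{(k)}</math>, where <math>g^{(k)}</math> is any subgradient of the objective at <math>x^{(k)}</math>, applied to the kinked convex function <math>f(x) = |x - 1|</math>; the objective, the diminishing step-size rule, and the starting point below are illustrative choices rather than part of any standard implementation:

<syntaxhighlight lang="python">
def f(x):
    # Convex but non-differentiable at x = 1 (a "kink").
    return abs(x - 1.0)

def subgradient(x):
    # Any element of the subdifferential of f at x.
    # At the kink every value in [-1, 1] is a valid subgradient; 0 is picked here.
    if x > 1.0:
        return 1.0
    if x < 1.0:
        return -1.0
    return 0.0

x = 5.0                      # illustrative starting point
best = f(x)
for k in range(1, 1001):
    g = subgradient(x)
    step = 1.0 / k           # diminishing, non-summable step sizes
    x = x - step * g
    best = min(best, f(x))   # subgradient steps need not decrease f, so keep the best value seen

print(best)                  # approaches the optimal value 0 as k grows
</syntaxhighlight>

Because individual subgradient steps are not guaranteed to be descent steps, the best objective value found so far is tracked rather than the value at the current iterate.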
Contemporary bundle-methods often use "[[level set|level]] control" rules for choosing step-sizes, developing techniques from the "subgradient-projection" method of Boris T. Polyak (1969). However, there are problems on which bundle methods offer little advantage over subgradient-projection methods.<ref name="Lem">
{{cite book| last=Lemaréchal|first=Claude|author-link=Claude Lemaréchal|chapter=Lagrangian relaxation|pages=112–156|title=Computational combinatorial optimization: Papers from the Spring School held in Schloß Dagstuhl, May 15–19, 2000|editor=Michael Jünger and Denis Naddef|series=Lecture Notes in Computer Science|volume=2241|publisher=Springer-Verlag| ___location=Berlin|year=2001|isbn=3-540-42877-1|mr=1900016|doi=10.1007/3-540-45586-8_4|s2cid=9048698 }}</ref><ref name="KLL">
{{cite journal|last1=Kiwiel|first1=Krzysztof C.|last2=Larsson|first2=Torbjörn|last3=Lindberg|first3=P. O.|title=Lagrangian relaxation via ballstep subgradient methods}}
</ref>
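As a rough illustration of the subgradient-projection idea referred to above (a sketch in the spirit of, not a reproduction of, Polyak's method), the following combines a Polyak-type step size, which presumes the optimal value is known, with Euclidean projection onto a feasible box; the objective, the box constraint, and the way the optimal value is obtained are illustrative assumptions:

<syntaxhighlight lang="python">
import numpy as np

TARGET = np.array([2.0, -3.0])

def f(x):
    # Non-differentiable convex objective: l1 distance to TARGET.
    return np.abs(x - TARGET).sum()

def subgradient(x):
    # A subgradient of the l1 objective (sign convention at kinks is arbitrary but valid).
    return np.sign(x - TARGET)

def project(x):
    # Euclidean projection onto the feasible box C = [-1, 1]^2.
    return np.clip(x, -1.0, 1.0)

# For this separable objective the constrained minimizer is the projection of TARGET,
# so the optimal value needed by the Polyak-type step is available in closed form.
f_opt = f(project(TARGET))

x = np.zeros(2)
for _ in range(200):
    g = subgradient(x)
    alpha = (f(x) - f_opt) / max(np.dot(g, g), 1e-12)  # Polyak-type step size
    x = project(x - alpha * g)                         # subgradient step, then projection onto C

print(x, f(x))  # approaches the constrained minimizer (1, -1) with value f_opt
</syntaxhighlight>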