Subgradient method
'''Subgradient methods''' are [[iterative method]]s for solving [[convex optimization|convex minimization]] problems. Originally developed by [[Naum Z. Shor]] and others in the 1960s and 1970s, subgradient methods converge even when applied to a non-differentiable objective function. When the objective function is differentiable, subgradient methods for unconstrained problems use the same search direction as the method of [[gradient descent|steepest descent]].
 
Subgradient methods are slower than Newton's method when applied to minimize twice continuously differentiable convex functions. However, Newton's method fails to converge on problems that have non-differentiable kinks.
 
In recent years, some [[interior-point methods]] have been suggested for convex minimization problems, but subgradient projection methods and related bundle methods of descent remain competitive. For convex minimization problems with a very large number of dimensions, subgradient-projection methods are suitable, because they require little storage.
 
Subgradient projection methods are often applied to large-scale problems with decomposition techniques. Such decomposition techniques often allow a simple distributed implementation of the method.
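As a rough illustration of the basic method described above (not taken from the article's own examples): a minimal sketch of the subgradient iteration with a diminishing step size, applied to the hypothetical non-differentiable function f(x) = |x − 3|. All names and the step-size choice here are illustrative assumptions.

```python
import numpy as np

def subgradient_method(f, subgrad, x0, num_iters=1000):
    """Minimize a convex (possibly non-differentiable) function f.

    Illustrative sketch: uses the diminishing step size 1/sqrt(k+1)
    and tracks the best iterate found, since a subgradient step is
    not necessarily a descent step.
    """
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for k in range(num_iters):
        g = subgrad(x)                 # any subgradient of f at x
        x = x - g / np.sqrt(k + 1.0)   # diminishing step size
        fx = f(x)
        if fx < best_f:                # keep the best point seen so far
            best_f, best_x = fx, x.copy()
    return best_x, best_f

# Hypothetical example: f(x) = |x - 3|, minimized at x = 3, where f is
# non-differentiable; sign(x - 3) is a valid subgradient elsewhere.
f = lambda x: abs(x[0] - 3.0)
subgrad = lambda x: np.array([np.sign(x[0] - 3.0)])
x_star, f_star = subgradient_method(f, subgrad, x0=[0.0])
```

Because the step size diminishes, the iterates oscillate around the kink at x = 3 with shrinking amplitude, and the best recorded value approaches the minimum.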
| last = Bertsekas
| first = Dimitri P.
| author-link = Dimitri P. Bertsekas
| title = Convex Optimization Algorithms
| edition = Second
}}
</ref>
 
===Convergence results===
 
| last = Bertsekas
| first = Dimitri P.
| author-link = Dimitri P. Bertsekas
| title = Nonlinear Programming
| edition = Second
| last = Shor
| first = Naum Z.
| author-link = Naum Z. Shor
| title = Minimization Methods for Non-differentiable Functions
| publisher = [[Springer-Verlag]]
| last = Bertsekas
| first = Dimitri P.
| author-link = Dimitri P. Bertsekas
| title = Nonlinear Programming
| edition = Second
{{cite book|last=Kiwiel|first=Krzysztof|title=Methods of Descent for Nondifferentiable Optimization|publisher=[[Springer Verlag]]|___location=Berlin|year=1985|pages=362|isbn=978-3540156420 |mr=0797754}}
</ref> Contemporary bundle methods often use "[[level set|level]] control" rules for choosing step sizes, developing techniques from the "subgradient-projection" method of Boris T. Polyak (1969). However, there are problems on which bundle methods offer little advantage over subgradient-projection methods.<ref name="Lem">
{{cite book| last=Lemaréchal|first=Claude|author-link=Claude Lemaréchal|chapter=Lagrangian relaxation|pages=112–156|title=Computational combinatorial optimization: Papers from the Spring School held in Schloß Dagstuhl, May 15–19, 2000|editor=Michael Jünger and Denis Naddef|series=Lecture Notes in Computer Science|volume=2241|publisher=Springer-Verlag| ___location=Berlin|year=2001|isbn=3-540-42877-1|mr=1900016|doi=10.1007/3-540-45586-8_4|ref=harv}}</ref><ref name="KLL">
{{cite journal|last1=Kiwiel|first1=Krzysztof&nbsp;C.|last2=Larsson |first2=Torbjörn|last3=Lindberg|first3=P.&nbsp;O.|title=Lagrangian relaxation via ballstep subgradient methods|url=http://mor.journal.informs.org/cgi/content/abstract/32/3/669 |journal=Mathematics of Operations Research|volume=32|date=August 2007|number=3|pages=669–686|mr=2348241|doi=10.1287/moor.1070.0261|ref=harv}}
</ref>
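The Polyak-style step rule mentioned above can be sketched as follows, under the strong assumption that the optimal value f* is known (in practice it must usually be estimated). The step a_k = (f(x_k) − f*) / ‖g_k‖² is standard for this rule; the function names and the example below are illustrative, not from the article.

```python
import numpy as np

def polyak_subgradient(f, subgrad, opt_val, x0, num_iters=500):
    """Subgradient method with the Polyak step size
    a_k = (f(x_k) - f*) / ||g_k||^2, assuming the optimal
    value f* (opt_val) is known in advance."""
    x = np.asarray(x0, dtype=float)
    best_x, best_f = x.copy(), f(x)
    for _ in range(num_iters):
        g = subgrad(x)
        gnorm2 = float(np.dot(g, g))
        if gnorm2 == 0.0:            # zero subgradient: x is optimal
            break
        x = x - ((f(x) - opt_val) / gnorm2) * g
        if f(x) < best_f:
            best_f, best_x = f(x), x.copy()
    return best_x, best_f

# Hypothetical example: f(x) = |x1| + |x2|, optimal value 0 at the origin;
# np.sign gives a valid subgradient componentwise.
f = lambda x: float(np.sum(np.abs(x)))
x_best, f_best = polyak_subgradient(f, np.sign, 0.0, [4.0, 1.0])
```

On this example the Polyak step reaches the origin exactly after two iterations, because the step length matches the remaining gap f(x_k) − f* along the subgradient direction.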
 
==References==
{{Reflist}}
 
==Further reading==
| last = Bertsekas
| first = Dimitri P.
| author-link = Dimitri P. Bertsekas
| title = Nonlinear Programming
| publisher = Athena Scientific
| last = Bertsekas
| first = Dimitri P.
| author-link = Dimitri P. Bertsekas
| title = Convex Optimization Algorithms
| publisher = Athena Scientific
| last = Shor
| first = Naum Z.
| author-link = Naum Z. Shor
| title = Minimization Methods for Non-differentiable Functions
| publisher = [[Springer-Verlag]]
}}
 
* {{cite book|last=Ruszczyński|first=Andrzej|author-link=Andrzej Piotr Ruszczyński|title=Nonlinear Optimization|publisher=[[Princeton University Press]]|___location=Princeton, NJ|year=2006|pages=xii+454|isbn=978-0691119151 |mr=2199043}}
 
==External links==