Subgradient method

'''Subgradient methods''' are [[iterative method]]s for solving [[convex optimization]] problems. Originally developed by [[Naum Z. Shor]] and others in the 1960s and 1970s, subgradient methods can be used with a non-differentiable objective function. When the objective function is differentiable, subgradient methods for unconstrained problems use the same search direction as the method of [[gradient descent|steepest descent]].
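As a minimal illustration of the iteration described above, the following Python sketch applies the basic subgradient update x_{k+1} = x_k − α_k g_k with a diminishing step size to the non-differentiable function f(x) = ‖x‖₁. The function names and step-size rule here are illustrative choices, not part of the article:

```python
import numpy as np

def subgradient_method(f, subgrad, x0, steps=1000):
    """Minimize a convex (possibly non-differentiable) function f using the
    subgradient iteration x_{k+1} = x_k - alpha_k * g_k, with the diminishing
    step size alpha_k = 1/(k+1)."""
    x = np.asarray(x0, dtype=float)
    # Subgradient methods are not descent methods: f may increase between
    # iterations, so we track the best point seen so far.
    best_x, best_f = x.copy(), f(x)
    for k in range(steps):
        g = subgrad(x)                 # any subgradient of f at x
        x = x - (1.0 / (k + 1)) * g    # diminishing step size
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    return best_x, best_f

# Example: f(x) = ||x||_1, for which sign(x) is a valid subgradient
# (at coordinates equal to 0, any value in [-1, 1] works; sign gives 0).
f = lambda x: np.abs(x).sum()
subgrad = lambda x: np.sign(x)
x_best, f_best = subgradient_method(f, subgrad, x0=[3.0, -2.0], steps=2000)
```

Because the step sizes α_k = 1/(k+1) are not summable but do converge to zero, the iterates approach the minimizer at the origin, and the best objective value found shrinks toward 0.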
 
Although subgradient methods can be much slower than [[interior-point methods]] and [[Newton's method in optimization|Newton's method]] in practice, they can be immediately applied to a far wider variety of problems and require much less memory. Moreover, by combining the subgradient method with primal or dual decomposition techniques, it is sometimes possible to develop a simple distributed algorithm for a problem.