{{short description|Subfield of mathematical optimization}}
'''Convex optimization''' is a subfield of [[mathematical optimization]] that studies the problem of minimizing [[convex function]]s over [[convex set]]s (or, equivalently, maximizing [[concave functions]] over convex sets). Many classes of convex optimization problems admit polynomial-time algorithms,<ref name="Nesterov 1994">{{harvnb|Nesterov|Nemirovskii|1994}}</ref> whereas mathematical optimization is in general [[NP-hard]].<ref>
{{cite journal | last1 = Murty | first1 = Katta | last2 = Kabadi | first2 = Santosh | title = Some NP-complete problems in quadratic and nonlinear programming | journal = Mathematical Programming | volume = 39 | issue = 2 | pages = 117–129 | year = 1987 | doi = 10.1007/BF02592948| hdl = 2027.42/6740 | s2cid = 30500771 | hdl-access = free}}</ref><ref>{{cite journal |last=Sahni |first=Sartaj |title=Computationally related problems |journal=SIAM Journal on Computing |volume=3 |pages=262–279 |year=1974}}</ref><ref>{{cite journal |last1=Pardalos |first1=Panos M. |last2=Vavasis |first2=Stephen A. |title=Quadratic programming with one negative eigenvalue is NP-hard |journal=Journal of Global Optimization |volume=1 |issue=1 |pages=15–22 |year=1991 |doi=10.1007/BF00120662}}</ref> With recent advancements in computing and [[Mathematical optimization#Computational optimization techniques|optimization algorithms]], convex programming is nearly as straightforward as [[linear programming]].<ref name=":2">{{Cite book |last1=Boyd |first1=Stephen |url=https://web.stanford.edu/~boyd/cvxbook/bv_cvxbook.pdf |title=Convex Optimization |last2=Vandenberghe |first2=Lieven |publisher=[[Cambridge University Press]] |year=2004 |isbn=978-0-521-83378-3 |access-date=12 Apr 2021}}</ref>{{Rp|page=8}}
 
==Definition==
 
==Algorithms==
 
=== Unconstrained and equality-constrained problems ===
The convex programs easiest to solve are the ''unconstrained'' problems, or the problems with only equality constraints. As the equality constraints are all linear, they can be eliminated with [[linear algebra]] and integrated into the objective, thus converting an equality-constrained problem into an unconstrained one.
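For example, a linear equality constraint ''Ax'' = ''b'' can be eliminated by writing every feasible point as ''x'' = ''x''<sub>0</sub> + ''Fz'', where ''x''<sub>0</sub> is any particular solution and the columns of ''F'' span the null space of ''A''; minimizing over ''z'' is then unconstrained. A minimal NumPy sketch with hypothetical problem data (a least-squares objective, one equality constraint):

```python
import numpy as np

# Illustrative problem: minimize ||x - c||^2 subject to A x = b.
# Eliminate the constraint by parametrizing the feasible set as
# x = x0 + F z, where A x0 = b and the columns of F span null(A).
A = np.array([[1.0, 1.0, 1.0]])   # single constraint: x1 + x2 + x3 = 1
b = np.array([1.0])
c = np.array([3.0, 0.0, 0.0])

x0, *_ = np.linalg.lstsq(A, b, rcond=None)   # a particular solution of A x = b
_, s, Vt = np.linalg.svd(A)
F = Vt[len(s):].T                            # orthonormal basis of null(A)

# The reduced problem min_z ||x0 + F z - c||^2 is unconstrained; for this
# quadratic objective it is an ordinary linear least-squares problem.
z, *_ = np.linalg.lstsq(F, c - x0, rcond=None)
x = x0 + F @ z                               # solution of the constrained problem

assert np.allclose(A @ x, b)                 # x satisfies the equality constraint
```

The same parametrization works for any convex objective: substituting ''x''<sub>0</sub> + ''Fz'' into the objective yields an unconstrained convex problem in ''z''.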
 
In the class of unconstrained (or equality-constrained) problems, the simplest ones are those in which the objective is [[Quadratic programming|quadratic]]. For these problems, the [[KKT conditions]] (which are necessary for optimality) are all linear, so they can be solved analytically.<ref name=":2" />{{Rp|location=chpt.11}}
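Concretely, for minimizing (1/2)''x''<sup>T</sup>''Px'' + ''q''<sup>T</sup>''x'' subject to ''Ax'' = ''b'', the KKT conditions form one linear system in the primal variable ''x'' and the multiplier ''ν''. A minimal NumPy sketch with hypothetical problem data:

```python
import numpy as np

# Illustrative QP: minimize (1/2) x^T P x + q^T x  subject to  A x = b.
P = np.array([[2.0, 0.0], [0.0, 2.0]])   # positive definite
q = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])               # constraint: x1 + x2 = 1
b = np.array([1.0])

n, m = P.shape[0], A.shape[0]
# KKT conditions: P x + A^T nu = -q and A x = b, linear in (x, nu).
KKT = np.block([[P, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([-q, b])
sol = np.linalg.solve(KKT, rhs)          # one linear solve gives the optimum
x, nu = sol[:n], sol[n:]
```

For this data the solve yields ''x'' = (0, 1) with multiplier ''ν'' = 2, which can be checked by hand against the stationarity and feasibility conditions.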
 
For unconstrained (or equality-constrained) problems with a general convex objective, [[Newton's method in optimization|Newton's method]] can be used. It can be seen as reducing a general unconstrained convex problem to a sequence of quadratic problems.<ref name=":2" />{{Rp|location=chpt.11}} Newton's method can be combined with [[line search]] for an appropriate step size, and it can be mathematically proven to converge quickly.
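A minimal sketch of damped Newton iteration with backtracking line search, on an illustrative smooth convex function (the test function and constants below are hypothetical, not from the source):

```python
import numpy as np

# Illustrative convex function f(x, y) = e^(x+y-1) + e^(x-y-1) + e^(-x-1),
# with its gradient and Hessian given in closed form.
def f(v):
    x, y = v
    return np.exp(x + y - 1) + np.exp(x - y - 1) + np.exp(-x - 1)

def grad(v):
    x, y = v
    e1, e2, e3 = np.exp(x + y - 1), np.exp(x - y - 1), np.exp(-x - 1)
    return np.array([e1 + e2 - e3, e1 - e2])

def hess(v):
    x, y = v
    e1, e2, e3 = np.exp(x + y - 1), np.exp(x - y - 1), np.exp(-x - 1)
    return np.array([[e1 + e2 + e3, e1 - e2], [e1 - e2, e1 + e2]])

def newton(v, alpha=0.25, beta=0.5, tol=1e-12, max_iter=50):
    for _ in range(max_iter):
        g, H = grad(v), hess(v)
        step = np.linalg.solve(H, -g)     # Newton step: solve H d = -g
        decrement = -g @ step             # Newton decrement squared
        if decrement / 2 < tol:
            break
        t = 1.0                           # backtracking line search (Armijo)
        while f(v + t * step) > f(v) + alpha * t * (g @ step):
            t *= beta
        v = v + t * step
    return v

x_star = newton(np.array([0.0, 0.0]))     # converges in a few iterations
```

Each iteration minimizes the local quadratic model of the objective, which is the sense in which Newton's method reduces the problem to a sequence of quadratic problems.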
 
Other efficient algorithms for unconstrained minimization include [[gradient descent]] (a special case of [[Method of steepest descent|steepest descent]]).
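Gradient descent only needs first-order information: repeatedly step against the gradient with a suitable step size. A minimal sketch on a hypothetical strongly convex quadratic:

```python
import numpy as np

# Illustrative objective f(x) = ||x - c||^2, whose gradient is 2(x - c).
c = np.array([1.0, -2.0])
grad = lambda x: 2 * (x - c)

x = np.zeros(2)
for _ in range(200):
    x = x - 0.1 * grad(x)   # fixed step size, small enough for convergence

# The iterates contract toward the minimizer c at a geometric rate.
```

For smooth strongly convex objectives, a fixed step size below a problem-dependent threshold guarantees geometric (linear-rate) convergence; in practice the step is often chosen by line search instead.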
 
=== General problems ===
The more challenging problems are those with inequality constraints. A common way to solve them is to reduce them to unconstrained problems by adding a [[barrier function]], enforcing the inequality constraints, to the objective function. Such methods are called [[interior point methods]].<ref name=":2" />{{Rp|location=chpt.11}} They have to be initialized by finding a feasible interior point using so-called ''phase I'' methods, which either find a feasible point or show that none exist. Phase I methods generally consist of reducing the search in question to a simpler convex optimization problem.<ref name=":2" />{{Rp|location=chpt.11}}
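A minimal sketch of the logarithmic barrier method for a linear program (all problem data below is hypothetical): the inequality-constrained problem is replaced by a sequence of unconstrained problems ''t''·''c''<sup>T</sup>''x'' − Σ log(''h''<sub>''i''</sub> − ''g''<sub>''i''</sub><sup>T</sup>''x''), each solved by Newton's method, with the barrier parameter ''t'' increased between solves:

```python
import numpy as np

# Illustrative LP: minimize c^T x subject to G x <= h (here: x >= 0),
# solved by minimizing t*c^T x - sum(log(h - G x)) for increasing t.
c = np.array([1.0, 1.0])
G = np.array([[-1.0, 0.0], [0.0, -1.0]])   # encodes x1 >= 0, x2 >= 0
h = np.array([0.0, 0.0])

def barrier_newton(x, t, iters=50):
    for _ in range(iters):
        s = h - G @ x                       # slacks, must remain positive
        g = t * c + G.T @ (1 / s)           # gradient of the barrier objective
        H = G.T @ np.diag(1 / s**2) @ G     # Hessian of the barrier objective
        step = np.linalg.solve(H, -g)
        a = 1.0                             # backtrack to stay strictly feasible
        while np.any(h - G @ (x + a * step) <= 0):
            a *= 0.5
        x = x + a * step
    return x

x = np.array([1.0, 1.0])                    # strictly feasible starting point
t = 1.0
for _ in range(10):                         # outer loop: tighten the barrier
    x = barrier_newton(x, t)
    t *= 10
# x approaches the optimum (0, 0); the duality gap is bounded by m / t.
```

Each outer iteration follows the "central path"; the starting point here is feasible by construction, which is exactly what a phase I method would otherwise have to supply.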
 
Convex optimization problems can also be solved by the following contemporary methods:<ref>For methods for convex minimization, see the volumes by Hiriart-Urruty and Lemaréchal (bundle) and the textbooks by [[Andrzej Piotr Ruszczyński|Ruszczyński]], [[Dimitri Bertsekas|Bertsekas]], and