{{short description|Subfield of mathematical optimization}}
'''Convex optimization''' is a subfield of [[mathematical optimization]] that studies the problem of minimizing [[convex function]]s over [[convex set]]s (or, equivalently, maximizing [[concave functions]] over convex sets). Many classes of convex optimization problems admit polynomial-time algorithms,<ref name="Nesterov 1994">{{harvnb|Nesterov|Nemirovskii|1994}}</ref> whereas mathematical optimization is in general [[NP-hard]].<ref>
{{cite journal | last1 = Murty | first1 = Katta | last2 = Kabadi | first2 = Santosh | title = Some NP-complete problems in quadratic and nonlinear programming | journal = Mathematical Programming | volume = 39 | issue = 2 | pages = 117–129 | year = 1987 | doi = 10.1007/BF02592948| hdl = 2027.42/6740 | s2cid = 30500771 | hdl-access = free}}</ref><ref>Sahni, S. "Computationally related problems," ''SIAM Journal on Computing'', 3, 262–279, 1974.</ref>
==Definition==
=== Abstract form ===
A convex optimization problem is defined by two ingredients:<ref>{{cite book |last1=Hiriart-Urruty |first1=Jean-Baptiste |url=https://books.google.com/books?id=Gdl4Jc3RVjcC&q=lemarechal+convex+analysis+and+minimization |title=Convex analysis and minimization algorithms: Fundamentals |last2=Lemaréchal |first2=Claude |year=1996 |isbn=9783540568506 |page=291|publisher=Springer }}</ref><ref>{{cite book |last1=Ben-Tal |first1=Aharon |url=https://books.google.com/books?id=M3MqpEJ3jzQC&q=Lectures+on+Modern+Convex+Optimization:+Analysis,+Algorithms, |title=Lectures on modern convex optimization: analysis, algorithms, and engineering applications |last2=Nemirovskiĭ |first2=Arkadiĭ Semenovich |year=2001 |isbn=9780898714913 |pages=335–336}}</ref>
* The ''objective function'', which is a real-valued [[convex function]] of ''n'' variables, <math>f :\mathcal D \subseteq \mathbb{R}^n \to \mathbb{R}</math>
* The ''feasible set'', which is a [[convex subset]] <math>C\subseteq \mathbb{R}^n</math>.
The goal of the problem is to find some <math>\mathbf{x^\ast} \in C</math> attaining
:<math>\inf \{ f(\mathbf{x}) : \mathbf{x} \in C \}.</math>
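For illustration, the following is a minimal sketch of such a problem written with the CVXPY modeling package: a convex quadratic objective is minimized over the nonnegative orthant (a convex set). The data <code>A</code> and <code>b</code> are arbitrary placeholders chosen only for the example.
<syntaxhighlight lang="python">
import cvxpy as cp
import numpy as np

# Arbitrary example data (not from any particular application).
np.random.seed(0)
A = np.random.randn(20, 5)
b = np.random.randn(20)

x = cp.Variable(5)                                   # decision variable x in R^5
objective = cp.Minimize(cp.sum_squares(A @ x - b))   # convex objective f
constraints = [x >= 0]                               # convex feasible set C
problem = cp.Problem(objective, constraints)
problem.solve()

print(problem.value)  # the attained optimal value
print(x.value)        # an optimal point x*
</syntaxhighlight>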
=== Conic form ===
Every convex program can be presented in a conic form, which means minimizing a linear objective over the intersection of an affine plane and a convex cone:
:<math>\begin{align}
&\underset{x}{\operatorname{minimize}} && c^{\mathsf{T}} x \\
&\operatorname{subject\ to} && x \in (b+L)\cap K
\end{align}</math>
where ''c'' is a vector in R''<sup>n</sup>'' defining the linear objective, K is a closed [[Convex cone|pointed convex cone]], L is a [[linear subspace]] of R''<sup>n</sup>'', and b is a vector in R''<sup>n</sup>''. A linear program in standard form is the special case in which K is the nonnegative orthant of R''<sup>n</sup>''.
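For instance (as an illustration, introducing a matrix <math>A</math> that is not part of the definition above), take <math>K = \mathbb{R}^n_{+}</math> and <math>L = \{x : Ax = 0\}</math>. Then <math>(b+L)\cap K</math> is the set of points with <math>Ax = Ab</math> and <math>x \ge 0</math>, so the conic program becomes
:<math>\begin{align}
&\underset{x}{\operatorname{minimize}} && c^{\mathsf{T}} x \\
&\operatorname{subject\ to} && Ax = Ab,\quad x \ge 0,
\end{align}</math>
which is a linear program in standard form.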
=== Eliminating linear equality constraints ===
<ref>{{cite journal |last1=Agrawal |first1=Akshay |last2=Verschueren |first2=Robin |last3=Diamond |first3=Steven |last4=Boyd |first4=Stephen |year=2018 |title=A rewriting system for convex optimization problems |url=https://web.stanford.edu/~boyd/papers/pdf/cvxpy_rewriting.pdf |journal=Control and Decision |volume=5 |issue=1 |pages=42–60 |arxiv=1709.04494 |doi=10.1080/23307706.2017.1397554 |s2cid=67856259}}</ref>
*[[Linear programming]] problems are the simplest convex programs. In LP, the objective and constraint functions are all linear.
*[[Geometric programming]]
*[[Entropy maximization]] with appropriate constraints.
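As an illustration of the last class, the following is a sketch of an entropy-maximization problem with simplex and linear moment constraints, again written with CVXPY; the moment matrix <code>A</code> and vector <code>b</code> are synthetic placeholders constructed so that the problem is feasible.
<syntaxhighlight lang="python">
import cvxpy as cp
import numpy as np

# Synthetic, feasible moment constraints (placeholders for illustration).
np.random.seed(1)
A = np.random.randn(3, 10)
p0 = np.random.rand(10)
p0 /= p0.sum()
b = A @ p0

p = cp.Variable(10)
# cp.entr(p) is the elementwise entropy -p*log(p), a concave function,
# so maximizing its sum is a convex optimization problem.
objective = cp.Maximize(cp.sum(cp.entr(p)))
constraints = [cp.sum(p) == 1, p >= 0, A @ p == b]
cp.Problem(objective, constraints).solve()

print(p.value)  # the maximum-entropy distribution satisfying the constraints
</syntaxhighlight>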
==Properties==
The following are useful properties of convex optimization problems:<ref name="rockafellar93">{{cite journal | author = Rockafellar, R. Tyrrell | title = Lagrange multipliers and optimality | journal = SIAM Review | volume = 35 | issue = 2 | year = 1993 | pages = 183–238 |url = http://web.williams.edu/Mathematics/sjmiller/public_html/105Sp10/handouts/Rockafellar_LagrangeMultAndOptimality.pdf | doi=10.1137/1035044| citeseerx = 10.1.1.161.7209}}</ref><ref name=":2" />{{Rp|___location=chpt.4}}
* every point that is a [[local minimum]] is also a [[global minimum]] (a short argument is sketched below);
* the optimal set is convex;
* if the objective function is ''strictly'' convex, then the problem has at most one optimal point.
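To see why the first property holds, suppose <math>x</math> is a local minimum of the convex function <math>f</math> over the convex set <math>C</math>, and suppose some <math>y \in C</math> had <math>f(y) < f(x)</math>. For every <math>t \in (0,1]</math> the point <math>z_t = (1-t)x + ty</math> lies in <math>C</math>, and convexity gives
:<math>f(z_t) \le (1-t)f(x) + t f(y) < f(x),</math>
so points of <math>C</math> arbitrarily close to <math>x</math> (taking <math>t</math> small) would have strictly smaller objective value, contradicting local minimality. Hence no such <math>y</math> exists, and <math>x</math> is a global minimum.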
*[[Subgradient method]]
*[[Drift plus penalty|Dual subgradients and the drift-plus-penalty method]]
Subgradient methods can be implemented simply and so are widely used.<ref>{{Cite journal |date=2006 |title=Numerical Optimization |url=https://link.springer.com/book/10.1007/978-0-387-40065-5 |journal=Springer Series in Operations Research and Financial Engineering |language=en |doi=10.1007/978-0-387-40065-5|isbn=978-0-387-30303-1 |url-access=subscription }}</ref> Dual subgradient methods are subgradient methods applied to a [[Duality (optimization)|dual problem]]. The [[Drift plus penalty|drift-plus-penalty]] method is similar to the dual subgradient method, but takes a time average of the primal variables.{{Citation needed|date=April 2021}}
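As an illustration, the following is a minimal sketch of a projected subgradient method, specialized to an arbitrary example problem (ℓ<sub>1</sub>-regularized least squares over the nonnegative orthant); the data, step-size rule, and iteration count are illustrative choices rather than part of any reference implementation.
<syntaxhighlight lang="python">
import numpy as np

# Illustrative data: minimize f(x) = 0.5*||A x - b||^2 + lam*||x||_1 over x >= 0.
np.random.seed(0)
A = np.random.randn(50, 20)
b = np.random.randn(50)
lam = 0.1

def subgradient(x):
    # A subgradient of f at x: gradient of the smooth part plus lam*sign(x) for the
    # l1 term (any element of [-1, 1] is valid where x_i == 0; 0 is used here).
    return A.T @ (A @ x - b) + lam * np.sign(x)

x = np.zeros(20)
best_x, best_f = x.copy(), np.inf
for k in range(1, 5001):
    g = subgradient(x)
    step = 1.0 / np.sqrt(k)            # classical diminishing step size
    x = np.maximum(x - step * g, 0.0)  # projection onto the nonnegative orthant
    f = 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum()
    if f < best_f:                     # subgradient methods are not descent methods,
        best_x, best_f = x.copy(), f   # so keep the best iterate seen so far

print(best_f)
</syntaxhighlight>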
== Lagrange multipliers ==
{{Main|Lagrange multiplier}}
{{Unreferenced section|date=April 2021}}
Consider a convex minimization problem given by a convex objective function <math>f</math>, a convex set <math>X\subseteq\mathbb{R}^n</math>, and convex constraint functions <math>g_1,\ldots,g_m</math>. The feasible set is
:<math>\mathcal{X} = \left\{x\in X \mid g_1(x)\leq 0, \ldots, g_m(x)\leq 0\right\}.</math>
The Lagrangian function for the problem is<ref>{{cite book |first1=Brian |last1=Beavis |first2=Ian M. |last2=Dobbs |chapter=Static Optimization |title=Optimization and Stability Theory for Economic Analysis |___location=New York |publisher=Cambridge University Press |year=1990 |isbn=0-521-33605-8 |page=40 |chapter-url=https://books.google.com/books?id=L7HMACFgnXMC&pg=PA40 }}</ref>
:<math>L(x,\lambda_{0},\lambda_1, \ldots ,\lambda_{m})=\lambda_{0} f(x) + \lambda_{1} g_{1} (x)+\cdots + \lambda_{m} g_{m} (x).</math>
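As a small worked example (with <math>\lambda_0 = 1</math>), consider minimizing <math>f(x)=x^2</math> over <math>x\in\mathbb{R}</math> subject to the single constraint <math>g_1(x)=1-x\le 0</math>. The Lagrangian is
:<math>L(x,1,\lambda_1)=x^{2}+\lambda_{1}(1-x),</math>
which for fixed <math>\lambda_1\ge 0</math> is minimized over <math>x</math> at <math>x=\lambda_1/2</math>, giving the dual function <math>\lambda_1-\lambda_1^{2}/4</math>; this is maximized at <math>\lambda_1=2</math>, recovering the optimal primal point <math>x^\ast=1</math>.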
Solvers implement the algorithms themselves and are usually written in C. They require users to specify optimization problems in very specific formats which may not be natural from a modeling perspective. Modeling tools are separate pieces of software that let the user specify an optimization problem in higher-level syntax. They manage all transformations to and from the user's high-level model and the solver's input/output format.
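For example, the following sketch uses the CVXPY modeling tool: the problem is stated in high-level syntax, and the modeling layer reformulates it into the standard conic form expected by the selected solver (here SCS, an arbitrary choice) and maps the solver's output back to the original variables. The data are placeholders.
<syntaxhighlight lang="python">
import cvxpy as cp
import numpy as np

# Illustrative problem data.
np.random.seed(0)
A = np.random.randn(30, 10)
b = np.random.randn(30)

# High-level model: constrained least squares.
x = cp.Variable(10)
problem = cp.Problem(cp.Minimize(cp.sum_squares(A @ x - b)),
                     [cp.norm(x, 1) <= 1])

# The modeling layer compiles this model to the conic form required by the
# selected solver and translates the solution back; SCS is chosen explicitly here.
problem.solve(solver=cp.SCS)

print(problem.status, problem.value)
print(x.value)
</syntaxhighlight>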
{| class="wikitable sortable"
|+
!Program
!Language
!Description
![[Free and open-source software|FOSS]]?
!Ref
|-
|CVX
|[[MATLAB]]
|Interfaces with SeDuMi and SDPT3 solvers; designed to only express convex optimization problems.
|
|<ref name=":3">{{Cite web|last=Borchers|first=Brian|title=An Overview Of Software For Convex Optimization|url=http://infohost.nmt.edu/~borchers/presentation.pdf|url-status=dead|archive-url=https://web.archive.org/web/20170918180026/http://infohost.nmt.edu/~borchers/presentation.pdf|archive-date=2017-09-18|access-date=12 Apr 2021}}</ref>
|-
|CVXPY
|Python
|
!{{Yes}}
|<ref>{{Cite web|title=Welcome to CVXPY 1.1 — CVXPY 1.1.11 documentation|url=https://www.cvxpy.org/|access-date=2021-04-12|website=www.cvxpy.org}}</ref>
|-
|Convex.jl
|[[Julia (programming language)|Julia]]
|Disciplined convex programming, supports many solvers.
|
|<ref>{{cite arXiv |last1=Udell |first1=Madeleine |last2=Mohan |first2=Karanveer |last3=Zeng |first3=David |last4=Hong |first4=Jenny |last5=Diamond |first5=Steven |last6=Boyd |first6=Stephen |date=2014-10-17 |title=Convex Optimization in Julia |class=math.OC |eprint=1410.4821 }}</ref>
|-
|CVXR
|[[R (programming language)|R]]
|
|
|<ref>{{Cite web|title=Disciplined Convex Optimiation - CVXR|url=https://www.cvxgrp.org/CVXR/|access-date=2021-06-17|website=www.cvxgrp.org}}</ref>
|-
|GAMS
|
|Modeling system for linear, nonlinear, mixed integer linear/nonlinear, and second-order cone programming problems.
!{{No}}
|<ref name=":3" />
|-
|GloptiPoly 3
|MATLAB,
Octave
|Modeling system for polynomial optimization.
!{{Yes}}
|<ref name=":3" />
|-
|JuMP.jl
|Julia
|Supports many solvers. Also supports integer and nonlinear optimization, and some nonconvex optimization.
!{{Yes}}
|<ref>{{cite journal |last1=Lubin |first1=Miles |last2=Dowson |first2=Oscar |last3=Dias Garcia |first3=Joaquim |last4=Huchette |first4=Joey |last5=Legat |first5=Benoît |last6=Vielma |first6=Juan Pablo |date=2023 |title=JuMP 1.0: Recent improvements to a modeling language for mathematical optimization | journal = Mathematical Programming Computation | doi = 10.1007/s12532-023-00239-3 |eprint=2206.03866 }}</ref>
|-
|ROME
|
|Modeling system for robust optimization. Supports distributionally robust optimization and [[uncertainty set]]s.
|
|<ref name=":3" />
|-
|SOSTOOLS
|
|Modeling system for [[polynomial optimization]]. Uses SDPT3 and SeDuMi. Requires Symbolic Computation Toolbox.
|
|<ref name=":3" />
|-
|SparsePOP
|
|Modeling system for polynomial optimization. Uses the SDPA or SeDuMi solvers.
|
|<ref name=":3" />
|-
|YALMIP
|MATLAB, Octave
|Interfaces with CPLEX, GUROBI, MOSEK, SDPT3, SEDUMI, CSDP, SDPA, PENNON solvers; also supports integer and nonlinear optimization, and some nonconvex optimization. Can perform [[robust optimization]] with uncertainty in LP/SOCP/SDP constraints.
!{{Yes}}
|<ref name=":3" />
|}
{| class="wikitable sortable"
|+
!Program
!Language
!Description
![[Free and open-source software|FOSS]]?
!Ref
|-
|AIMMS
|
|Can do robust optimization on linear programming (with MOSEK to solve second-order cone programming) and [[mixed integer linear programming]]. Modeling package for LP + SDP and robust versions.
!{{No}}
|<ref name=":3" />
|-
|
|
|Supports primal-dual methods for LP + SOCP. Can solve LP, QP, SOCP, and mixed integer linear programming problems.
|
|<ref name=":3" />
|-
|CSDP
|[[C (programming language)|C]]
|Supports primal-dual methods for LP + SDP. Interfaces available for MATLAB, [[R (programming language)|R]], and Python. Parallel version available. SDP solver.
|
|<ref name=":3" />
|-
|CVXOPT
|Python
|Supports primal-dual methods for LP + SOCP + SDP. Uses Nesterov-Todd scaling. Interfaces to MOSEK and DSDP.
|
|<ref name=":3" />
|-
|
|
|Supports primal-dual methods for LP + SOCP.
|
|<ref name=":3" />
|-
|SeDuMi
|MATLAB, Octave, [[MEX file|MEX]]
|Solves LP + SOCP + SDP. Supports primal-dual methods for LP + SOCP + SDP.
|
|<ref name=":3" />
|-
|SDPA
|[[C++]]
|Solves LP + SDP. Supports primal-dual methods for LP + SDP. Parallelized and extended precision versions are available.
|
|<ref name=":3" />
|-
|SDPT3
|MATLAB, Octave, MEX
|Solves LP + SOCP + SDP. Supports primal-dual methods for LP + SOCP + SDP.
|
|<ref name=":3" />
|-
|ConicBundle
|
|Supports general-purpose codes for LP + SOCP + SDP. Uses a bundle method. Special support for SDP and SOCP constraints.
|
|<ref name=":3" />
|-
|DSDP
|
|Supports general-purpose codes for LP + SDP. Uses a dual interior point method.
|
|<ref name=":3" />
|-
|LOQO
|
|Supports general-purpose codes for SOCP, which it treats as a nonlinear programming problem.
|
|<ref name=":3" />
|-
|PENNON
|
|Supports general-purpose codes. Uses an augmented Lagrangian method, especially for problems with SDP constraints.
|
|<ref name=":3" />
|-
|SDPLR
|
|Supports general-purpose codes. Uses low-rank factorization with an augmented Lagrangian method.
|
|<ref name=":3" />
|}
== Applications ==
Convex optimization can be used to model problems in a wide range of disciplines, such as automatic [[control systems]], estimation and [[signal processing]], communications and networks, electronic [[circuit design]],<ref name=":2" />{{Rp|___location=|page=17}} data analysis and modeling, [[finance]], and [[statistics]] ([[optimal design|optimal experimental design]]).
* [[Portfolio optimization]].<ref name=":0">{{Cite web |last1=Boyd |first1=Stephen |last2=Diamond |first2=Stephen |last3=Zhang |first3=Junzi |last4=Agrawal |first4=Akshay |title=Convex Optimization Applications |url=https://web.stanford.edu/~boyd/papers/pdf/cvx_applications.pdf |url-status=live |archive-url=https://web.archive.org/web/20151001185038/http://web.stanford.edu/~boyd/papers/pdf/cvx_applications.pdf |archive-date=2015-10-01 |access-date=12 Apr 2021}}</ref>