Active-set method

{{redirect|Active set|the band|The Active Set}}
 
In mathematical [[Optimization (mathematics)|optimization]], the '''active-set method''' is an algorithm used to identify the active [[Constraint (mathematics)|constraints]] in a set of [[Inequality (mathematics)|inequality]] constraints. The active constraints are then expressed as equality constraints, thereby transforming an inequality-constrained problem into a simpler equality-constrained subproblem.
 
An optimization problem is defined using an objective function to minimize or maximize, and a set of constraints
: <math>g_1(x) \ge 0, \dots, g_k(x) \ge 0</math>
 
that define the [[feasible region]], that is, the set of all ''x'' to search for the optimal solution. Given a point <math>x</math> in the feasible region, a constraint
: <math>g_i(x) \ge 0</math>
is called '''active''' at <math>x</math> if <math>g_i(x)=0</math>, and '''inactive''' at <math>x</math> if <math>g_i(x)>0.</math> Equality constraints are always active. The '''active set''' at <math>x</math> is made up of those constraints <math>g_i(x)</math> that are active at the current point {{harv|Nocedal|Wright|2006|p=308}}.
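As a small illustration, the active set at a point can be found by checking which constraint functions vanish there, up to a numerical tolerance (the helper below is hypothetical, not from any particular library):

```python
# Identify the active set at a point x for constraints g_i(x) >= 0.
# `active_set` is an illustrative helper, not a standard library function.

def active_set(constraints, x, tol=1e-9):
    """Indices i with g_i(x) = 0 (within tol), i.e. the active constraints."""
    return [i for i, g in enumerate(constraints) if abs(g(x)) <= tol]

# Feasible region [0, 1], described by x >= 0 and 1 - x >= 0.
g = [lambda x: x, lambda x: 1.0 - x]

print(active_set(g, 0.0))  # [0]  -- the bound x >= 0 is active
print(active_set(g, 0.5))  # []   -- interior point, no active constraints
print(active_set(g, 1.0))  # [1]  -- the bound 1 - x >= 0 is active
```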
 
The active set is particularly important in optimization theory, as it determines which constraints will influence the final result of optimization. For example, in solving the [[linear programming]] problem, the active set gives the [[hyperplane]]s that intersect at the solution point. In [[quadratic programming]], as the solution is not necessarily on one of the edges of the bounding polygon, an estimation of the active set gives us a subset of inequalities to watch while searching for the solution, which reduces the complexity of the search.
 
==Active-set methods==
In general an active-set algorithm has the following structure:
 
 
: Find a feasible starting point
: '''repeat until''' "optimal enough"
:: ''solve'' the equality problem defined by the active set (approximately)
:: ''compute'' the [[Lagrange multipliers]] of the active set
:: ''remove'' a subset of the constraints with negative Lagrange multipliers
:: ''search'' for violated constraints among the inactive constraints and add them to the active set
: '''end repeat'''
 
The motivation for this approach is that, near the optimum, usually only a small number of the constraints are binding, while the solve step typically takes superlinear time in the number of constraints. Repeatedly solving a sequence of equality-constrained problems, dropping constraints that are not violated but stand in the way of improvement (those with negative Lagrange multipliers) and adding constraints that the current solution violates, can therefore converge to the true solution. The optimum of the previous problem can often provide an initial guess when the equality-constrained solver needs an initial value.
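The loop above can be sketched concretely for the special case of a bound-constrained quadratic program, minimizing <math>\tfrac{1}{2}x^\top Q x + c^\top x</math> subject to <math>x \ge 0</math>. This is a minimal sketch, assuming <math>Q</math> is symmetric positive definite, not production code:

```python
import numpy as np

def qp_active_set_nonneg(Q, c, max_iter=100, tol=1e-10):
    """Primal active-set sketch: minimize 0.5 x'Qx + c'x subject to x >= 0.
    Assumes Q is symmetric positive definite."""
    n = len(c)
    x = np.zeros(n)            # x = 0 is feasible; start with every bound active
    W = set(range(n))          # working set: indices held at their bound x_i = 0
    for _ in range(max_iter):
        free = [i for i in range(n) if i not in W]
        # Equality-constrained subproblem: fix x_i = 0 for i in W and
        # minimize over the free variables, i.e. solve Q_FF x_F = -c_F.
        x_sub = np.zeros(n)
        if free:
            x_sub[free] = np.linalg.solve(Q[np.ix_(free, free)], -c[free])
        step = x_sub - x
        if np.linalg.norm(step) <= tol:
            # Lagrange multipliers of the active bounds: lambda_i = (Qx + c)_i.
            lam = Q @ x + c
            negative = [i for i in W if lam[i] < -tol]
            if not negative:
                return x                       # all multipliers >= 0: optimal
            W.remove(min(negative, key=lambda i: lam[i]))  # drop most negative
        else:
            # Move toward the subproblem solution, stopping at the first
            # bound that would be violated (the blocking constraint).
            alpha, blocking = 1.0, None
            for i in free:
                if step[i] < 0 and -x[i] / step[i] < alpha:
                    alpha, blocking = -x[i] / step[i], i
            x = x + alpha * step
            if blocking is not None:
                W.add(blocking)                # that bound becomes active
    return x
```

Real implementations, such as those described by Nocedal and Wright, handle general linear constraints, update matrix factorizations incrementally instead of re-solving from scratch, and guard against cycling.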
 
Methods that can be described as '''active-set methods''' include:<ref>{{harvnb|Nocedal|Wright|2006|pp=467–480}}</ref>
* [[Successive linear programming]] (SLP) <!-- acc. to: Leyffer... - alt: acc. to "MPS glossary", http://glossary.computing.society.informs.org/ver2/mpgwiki/index.php/Main_Page: Successive approximation -->
* [[Sequential quadratic programming]] (SQP) <!-- acc. to: Leyffer... - alt: acc. to "MPS glossary", http://glossary.computing.society.informs.org/ver2/mpgwiki/index.php/Main_Page: Successive approximation -->
<!-- ? Method of feasible directions (MFD) -->
<!-- ? Gradient projection method - alt: acc. to "Optimization - Theory and Practice" (Forst, Hoffmann): Projection method -->
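As a usage example, SciPy's <code>SLSQP</code> routine (sequential least-squares programming, an SQP implementation) can solve a small inequality-constrained problem; this assumes SciPy is available:

```python
import numpy as np
from scipy.optimize import minimize

# Minimize x1^2 - 2*x1 + x2^2 + x2 subject to x1 >= 0 and x2 >= 0.
# The unconstrained minimizer is (1, -0.5); the bound x2 >= 0 is
# active at the constrained solution (1, 0).
res = minimize(lambda x: x[0]**2 - 2*x[0] + x[1]**2 + x[1],
               x0=[0.5, 0.5],
               method='SLSQP',
               constraints=[{'type': 'ineq', 'fun': lambda x: x}])
print(res.x)   # approximately [1.0, 0.0]
```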
 
== Performance ==
Consider the problem of linearly constrained convex quadratic programming. Under reasonable assumptions (the problem is feasible, the system of constraints is regular at every point, and the quadratic objective is strongly convex), the active-set method terminates after finitely many steps and yields a global solution to the problem. In theory the active-set method may perform a number of iterations exponential in the number of constraints ''m'', like the [[simplex method]], but its practical behaviour is typically much better.<ref name=":0">{{Cite web |last=Nemirovsky and Ben-Tal |date=2023 |title=Optimization III: Convex Optimization |url=http://www2.isye.gatech.edu/~nemirovs/OPTIIILN2023Spring.pdf}}</ref>{{Rp|___location=Sec.9.1}}
 
==References==
{{Reflist}}
 
==Bibliography==
* {{cite book |last=Murty |first=K. G. |title=Linear complementarity, linear and nonlinear programming |series=Sigma Series in Applied Mathematics |volume=3 |publisher=Heldermann Verlag |___location=Berlin |year=1988 |pages=xlviii+629 pp. |isbn=3-88538-403-5 |url=http://ioe.engin.umich.edu/people/fac/books/murty/linear_complementarity_webbook/ |access-date=2010-04-03 |archive-url=https://web.archive.org/web/20100401043940/http://ioe.engin.umich.edu/people/fac/books/murty/linear_complementarity_webbook/ |archive-date=2010-04-01 |url-status=dead}} {{MR|949214}}
* {{Cite book | last1=Nocedal | first1=Jorge | last2=Wright | first2=Stephen J. | title=Numerical Optimization | publisher=[[Springer-Verlag]] | ___location=Berlin, New York | edition=2nd | isbn=978-0-387-30303-1 | year=2006 | ref=harv | postscript=<!--None-->}}
 
[[Category:Mathematical optimization]]
[[Category:Optimization algorithms and methods]]