Active-set method

{{redirect|Active set|the band|The Active Set}}
 
In mathematical [[Optimization (mathematics)|optimization]], the '''active-set method''' is an algorithm used to identify the active [[Constraint (mathematics)|constraints]] in a set of [[Inequality (mathematics)|inequality]] constraints. The active constraints are then expressed as equality constraints, thereby transforming an inequality-constrained problem into a simpler equality-constrained subproblem.
 
An optimization problem is defined using an objective function to minimize or maximize, and a set of constraints
 
: <math>g_1(x) \ge 0, \dots, g_k(x) \ge 0</math>
Given a point <math>x_0</math> in the feasible region, a constraint
: <math>g_i(x) \ge 0</math>
 
is called '''active''' at <math>x_0</math> if <math>g_i(x_0) = 0</math>, and '''inactive''' at <math>x_0</math> if <math>g_i(x_0) > 0.</math> Equality constraints are always active. The '''active set''' at <math>x_0</math> is made up of those constraints <math>g_i(x_0)</math> that are active at the current point {{harv|Nocedal|Wright|2006|p=308}}.
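As a concrete illustration of this definition, the active set at a point can be found by checking which constraint functions vanish there. The following is a minimal Python sketch (illustrative only; the helper name and constraint functions are hypothetical, not from any library), using the convention <math>g_i(x) \ge 0</math>:

```python
# Illustrative sketch: identify the active set at a point x0, given
# constraint functions in the convention g_i(x) >= 0.
def active_set(constraints, x0, tol=1e-9):
    """Indices i where g_i(x0) = 0 (to within a tolerance)."""
    return [i for i, g in enumerate(constraints) if abs(g(x0)) <= tol]

# Feasible region: x1 >= 0, x2 >= 0, x1 + x2 <= 1.
g = [lambda x: x[0],
     lambda x: x[1],
     lambda x: 1 - x[0] - x[1]]

print(active_set(g, (0.0, 0.5)))  # [0]: only x1 >= 0 is active
print(active_set(g, (0.5, 0.5)))  # [2]: only x1 + x2 <= 1 is active
print(active_set(g, (0.3, 0.3)))  # []: interior point, nothing active
```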
 
The active set is particularly important in optimization theory, as it determines which constraints will influence the final result of optimization. For example, in solving the [[linear programming]] problem, the active set gives the [[hyperplane]]s that intersect at the solution point. In [[quadratic programming]], the solution is not necessarily on one of the edges of the bounding polygon, so an estimate of the active set gives a subset of inequalities to watch while searching for the solution, which reduces the complexity of the search.
==Algorithms==
In general an active-set algorithm has the following structure:
: Find a feasible starting point
: '''repeat until''' "optimal enough"
:: ''solve'' the equality problem defined by the active set (approximately)
:: ''compute'' the [[Lagrange multipliers]] of the active set
:: ''remove'' a subset of the constraints with negative Lagrange multipliers
:: ''search'' for infeasible constraints among the inactive constraints and add them to the problem
: '''end repeat'''
 
The motivation for this approach is that near the optimum usually only a small number of constraints are binding, while the solve step typically takes superlinear time in the number of constraints. Repeatedly solving a series of equality-constrained problems, dropping constraints that are not violated but stand in the way of improvement (those with negative Lagrange multipliers) and adding constraints that the current solution violates, can therefore converge to the true solution. The optimum of the previous subproblem often provides a good initial guess when the equality-constrained solver requires an initial value.
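The loop above can be sketched for a convex quadratic program. The following is a simplified Python illustration (assuming NumPy; a sketch of a primal active-set method, not a robust solver) for minimizing <math>\tfrac{1}{2}x^\mathsf{T}Gx + c^\mathsf{T}x</math> subject to <math>Ax \ge b</math>:

```python
import numpy as np

def solve_qp_active_set(G, c, A, b, x0, W=None, max_iter=100, tol=1e-9):
    """Minimize 1/2 x'Gx + c'x subject to A x >= b, starting from a
    feasible point x0 with initial working set W (constraint indices
    treated as equalities). Assumes G is symmetric positive definite."""
    x = np.asarray(x0, dtype=float)
    n = len(x)
    W = list(W) if W is not None else []
    for _ in range(max_iter):
        g = G @ x + c                              # objective gradient at x
        m = len(W)
        if m:
            # Equality-constrained subproblem via its KKT system:
            #   [G  -Aw'] [p  ]   [-g]
            #   [Aw   0 ] [lam] = [ 0]
            Aw = A[np.array(W)]
            K = np.block([[G, -Aw.T], [Aw, np.zeros((m, m))]])
            sol = np.linalg.solve(K, np.concatenate([-g, np.zeros(m)]))
            p, lam = sol[:n], sol[n:]
        else:
            p, lam = np.linalg.solve(G, -g), np.empty(0)
        if np.linalg.norm(p) < tol:
            if m == 0 or lam.min() >= -tol:
                return x, sorted(W)                # all multipliers >= 0: KKT point
            W.pop(int(np.argmin(lam)))             # drop a constraint with negative multiplier
        else:
            # Longest feasible step along p; a constraint that becomes
            # active ("blocking") is added to the working set.
            alpha, blocking = 1.0, None
            for i in range(len(b)):
                if i not in W and A[i] @ p < -tol:
                    step = (A[i] @ x - b[i]) / -(A[i] @ p)
                    if step < alpha:
                        alpha, blocking = step, i
            x = x + alpha * p
            if blocking is not None:
                W.append(blocking)
    return x, sorted(W)

# Example: minimize (x1 - 1)^2 + (x2 - 2.5)^2 over a pentagonal region,
# written as 1/2 x'(2I)x + (-2,-5)'x plus a constant.
G = 2.0 * np.eye(2)
c = np.array([-2.0, -5.0])
A = np.array([[ 1.0, -2.0],
              [-1.0, -2.0],
              [-1.0,  2.0],
              [ 1.0,  0.0],
              [ 0.0,  1.0]])
b = np.array([-2.0, -6.0, -2.0, 0.0, 0.0])
x, active = solve_qp_active_set(G, c, A, b, x0=[2.0, 0.0], W=[2, 4])
print(x, active)   # roughly [1.4 1.7], with constraint 0 active
```

Note how the loop mirrors the pseudocode: the KKT solve is the equality-constrained subproblem, a negative multiplier causes a constraint to be dropped from the working set, and a newly violated (blocking) constraint is added.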
 
Methods that can be described as '''active-set methods''' include:<ref>{{harvnb|Nocedal|Wright|2006|pp=467–480}}</ref>
<!-- ? Method of feasible directions (MFD) -->
<!-- ? Gradient projection method - alt: acc. to "Optimization - Theory and Practice" (Forst, Hoffmann): Projection method -->
 
== Performance ==
Consider the problem of linearly constrained convex quadratic programming. Under reasonable assumptions (the problem is feasible, the system of constraints is regular at every point, and the quadratic objective is strongly convex), the active-set method terminates after finitely many steps and yields a global solution to the problem. In theory, the active-set method may perform a number of iterations exponential in the number of constraints ''m'', like the [[simplex method]], but its practical behaviour is typically much better.<ref name=":0">{{Cite web |last=Nemirovsky and Ben-Tal |date=2023 |title=Optimization III: Convex Optimization |url=http://www2.isye.gatech.edu/~nemirovs/OPTIIILN2023Spring.pdf}}</ref>{{Rp|___location=Sec.9.1}}
 
==References==
{{Reflist|30em}}
 
==Bibliography==
* {{cite book |last=Murty |first=K. G. |title=Linear complementarity, linear and nonlinear programming |series=Sigma Series in Applied Mathematics |volume=3 |publisher=Heldermann Verlag |___location=Berlin |year=1988 |pages=xlviii+629 pp. |isbn=3-88538-403-5 |url=http://ioe.engin.umich.edu/people/fac/books/murty/linear_complementarity_webbook/ |ref=harv |mr=949214 |access-date=2010-04-03 |archive-url=https://web.archive.org/web/20100401043940/http://ioe.engin.umich.edu/people/fac/books/murty/linear_complementarity_webbook/ |archive-date=2010-04-01 |url-status=dead }}
* {{Cite book | last1=Nocedal | first1=Jorge | last2=Wright | first2=Stephen J. | title=Numerical Optimization | publisher=[[Springer-Verlag]] | ___location=Berlin, New York | edition=2nd | isbn=978-0-387-30303-1 | year=2006 | ref=harv | postscript=.}}
 
[[Category:Optimization algorithms and methods]]