{{redirect|Active set|the band|The Active Set}}
In mathematical [[Optimization (mathematics)|optimization]], the '''active-set method''' is an algorithm used to identify the active constraints in a set of inequality constraints.

An optimization problem is defined using an objective function to minimize or maximize, and a set of constraints
: <math>g_1(x) \ge 0, \dots, g_k(x) \ge 0</math>
that define the [[feasible region]], that is, the set of all ''x'' to search for the optimal solution. Given a point <math>x</math> in the feasible region, a constraint
: <math>g_i(x) \ge 0</math>
is called '''active''' at <math>x</math> if <math>g_i(x) = 0</math>, and '''inactive''' at <math>x</math> if <math>g_i(x) > 0</math>. Equality constraints are always active. The '''active set''' at <math>x</math> is made up of those constraints <math>g_i(x)</math> that are active at the current point.
The active set is particularly important in optimization theory, as it determines which constraints will influence the final result of optimization. For example, in solving the [[linear programming]] problem, the active set gives the [[hyperplane]]s that intersect at the solution point. In [[quadratic programming]], as the solution is not necessarily on one of the edges of the bounding polygon, an estimate of the active set gives us a subset of inequalities to watch while searching for the solution, which reduces the complexity of the search.
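The definition can be illustrated numerically. The short Python sketch below (the constraint functions and the helper <code>active_set</code> are made up for illustration, not taken from any library) evaluates each <math>g_i</math> at a point and reports which constraints are active; a small tolerance stands in for exact equality, since floating-point arithmetic rarely yields an exact zero.

<syntaxhighlight lang="python">
import numpy as np

def active_set(constraints, x, tol=1e-9):
    """Indices i for which the constraint g_i(x) >= 0 is active, i.e. g_i(x) == 0 (up to tol)."""
    return [i for i, g in enumerate(constraints) if abs(g(x)) <= tol]

# Feasible region: x1 >= 0, x2 >= 0, 1 - x1 - x2 >= 0 (a triangle).
g = [lambda x: x[0],
     lambda x: x[1],
     lambda x: 1.0 - x[0] - x[1]]

print(active_set(g, np.array([0.0, 0.4])))  # [0]    -> only x1 >= 0 is active (an edge)
print(active_set(g, np.array([0.0, 1.0])))  # [0, 2] -> two constraints active (a vertex)
</syntaxhighlight>

At the vertex <math>(0, 1)</math> two constraints are active at once, matching the linear-programming picture above of hyperplanes intersecting at the solution point.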
In general an active-set algorithm has the following structure:
: Find a feasible starting point
: '''repeat until''' "optimal enough"
:: ''solve'' the equality problem defined by the active set (approximately)
:: ''compute'' the [[Lagrange multipliers]] of the active set
:: ''remove'' a subset of the constraints with negative Lagrange multipliers
:: ''search'' for infeasible constraints among the inactive constraints and add them to the problem
: '''end repeat'''
The motivation for this approach is that, near the optimum, usually only a small number of constraints are binding, while the cost of the solve step typically grows superlinearly with the number of constraints. Repeatedly solving a sequence of equality-constrained problems, dropping constraints that are not violated but stand in the way of improvement (those with negative Lagrange multipliers) and adding constraints that the current solution violates, can therefore converge to the true solution. The optimum of the previous subproblem often provides an initial guess when the equality-constrained solver requires a starting value.
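To make this loop concrete, the following NumPy sketch implements a basic primal active-set method for the convex quadratic program <math>\min_x \tfrac{1}{2} x^\mathsf{T} G x + c^\mathsf{T} x</math> subject to <math>Ax \ge b</math>, in the spirit of the textbook scheme described by Nocedal and Wright. It is only an illustration under simplifying assumptions: <math>G</math> is positive definite, the starting point is feasible, and the working-set rows of <math>A</math> remain linearly independent. The function name <code>active_set_qp</code> and all identifiers are invented for this example, and the safeguards of a production solver (anti-cycling rules, factorization updates, careful tolerances) are omitted.

<syntaxhighlight lang="python">
import numpy as np

def active_set_qp(G, c, A, b, x0, W0=(), tol=1e-9, max_iter=100):
    """Primal active-set sketch for: minimize 1/2 x^T G x + c^T x  subject to  A x >= b.
    x0 must be feasible; W0 is an initial guess of the active (working) set."""
    G, c = np.asarray(G, float), np.asarray(c, float)
    A, b = np.asarray(A, float), np.asarray(b, float)
    x = np.asarray(x0, float)
    n, W = len(x), list(W0)
    for _ in range(max_iter):
        g = G @ x + c                                  # gradient of the objective at x
        m = len(W)
        Aw = A[W] if W else np.zeros((0, n))
        # Equality-constrained subproblem: min_p 1/2 p^T G p + g^T p  s.t.  Aw p = 0
        K = np.zeros((n + m, n + m))
        K[:n, :n] = G
        K[:n, n:] = Aw.T
        K[n:, :n] = Aw
        p = np.linalg.solve(K, np.concatenate([-g, np.zeros(m)]))[:n]
        if np.linalg.norm(p) < tol:
            # No progress with the current working set: check the multipliers (Aw^T lam = g).
            lam = np.linalg.lstsq(Aw.T, g, rcond=None)[0] if W else np.array([])
            if lam.size == 0 or lam.min() >= -tol:
                return x, W                            # KKT conditions hold: done
            W.pop(int(np.argmin(lam)))                 # drop a constraint with negative multiplier
        else:
            # Longest step along p that keeps every inactive constraint satisfied.
            alpha, block = 1.0, None
            for i in range(len(b)):
                if i not in W and A[i] @ p < -tol:
                    step = (b[i] - A[i] @ x) / (A[i] @ p)
                    if step < alpha:
                        alpha, block = step, i
            x = x + alpha * p
            if block is not None:
                W.append(block)                        # add the blocking constraint
    return x, W
</syntaxhighlight>

For instance, with <math>G = I</math>, <math>c = 0</math>, the single constraint <math>x_1 + x_2 \ge 1</math> and the feasible start <math>(1, 0)</math>, the sketch first adds the constraint to the working set and then returns <math>(0.5, 0.5)</math>, where the non-negative multiplier confirms optimality.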
Methods that can be described as '''active-set methods''' include:<ref>{{harvnb|Nocedal|Wright|2006|pp=467–480}}</ref>
<!-- ? Method of feasible directions (MFD) -->
<!-- ? Gradient projection method - alt: acc. to "Optimization - Theory and Practice" (Forst, Hoffmann): Projection method -->
== Performance ==
Consider the problem of linearly constrained convex quadratic programming. Under reasonable assumptions (the problem is feasible, the system of constraints is regular at every point, and the quadratic objective is strongly convex), the active-set method terminates after finitely many steps and yields a global solution to the problem. Theoretically, the active-set method may perform a number of iterations exponential in the number of constraints ''m'', like the [[simplex method]]. However, its practical behaviour is typically much better.<ref name=":0">{{Cite web |last=Nemirovsky and Ben-Tal |date=2023 |title=Optimization III: Convex Optimization |url=http://www2.isye.gatech.edu/~nemirovs/OPTIIILN2023Spring.pdf}}</ref>{{Rp|___location=Sec.9.1}}
==References==
{{Reflist}}
==Bibliography==
* {{cite book |last=Murty |first=K. G. |title=Linear complementarity, linear and nonlinear programming |series=Sigma Series in Applied Mathematics |volume=3 |publisher=Heldermann Verlag |___location=Berlin |year=1988 |pages=xlviii+629}}
* {{Cite book | last1=Nocedal | first1=Jorge | last2=Wright | first2=Stephen J. | title=Numerical Optimization | publisher=[[Springer-Verlag]] | ___location=Berlin, New York | edition=2nd | isbn=978-0-387-30303-1 | year=2006}}
[[Category:Optimization algorithms and methods]]