===Relation to constraint-satisfaction problems===
The constrained-optimization problem (COP) is a significant generalization of the classic [[constraint-satisfaction problem]] (CSP) model.<ref>{{Citation|last=Rossi|first=Francesca|title=Chapter 1 – Introduction|date=2006-01-01|url=http://www.sciencedirect.com/science/article/pii/S1574652606800052|work=Foundations of Artificial Intelligence|volume=2|pages=3–12|editor-last=Rossi|editor-first=Francesca|series=Handbook of Constraint Programming|publisher=Elsevier|doi=10.1016/s1574-6526(06)80005-2|access-date=2019-10-04|last2=van Beek|first2=Peter|last3=Walsh|first3=Toby|editor2-last=van Beek|editor2-first=Peter|editor3-last=Walsh|editor3-first=Toby}}</ref> COP is a CSP that includes an ''objective function'' to be optimized. Many algorithms are used to handle the optimization part.
===Equality constraints===
====Substitution method====
For very simple problems, say a function of two variables subject to a single equality constraint, it is most practical to apply the method of substitution.<ref>{{cite book |first=Mike |last=Prosser |title=Basic Mathematics for Economists |___location=New York |publisher=Routledge |year=1993 |isbn=0-415-08424-5 |chapter=Constrained Optimization by Substitution |pages=338–346 }}</ref> The idea is to substitute the constraint into the objective function to create a [[Function composition|composite function]] that incorporates the effect of the constraint. For example, assume the objective is to maximize <math>f(x,y) = x \cdot y</math> subject to <math>x + y = 10</math>. The constraint implies <math>y = 10 - x</math>, which can be substituted into the objective function to create <math>p(x) = x (10 - x) = 10x - x^{2}</math>. The first-order necessary condition gives <math>\frac{\partial p}{\partial x} = 10 - 2x = 0</math>, which can be solved for <math>x=5</math> and, consequently, <math>y = 10 - 5 = 5</math>.
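As a quick illustrative check of the worked example above (the use of SymPy here is one possible choice, not part of the method itself), the substitution and first-order condition can be reproduced symbolically:

<syntaxhighlight lang="python">
# A minimal sketch of the substitution method for the example above,
# using SymPy (illustrative only; hand algebra works equally well).
import sympy as sp

x = sp.symbols('x')
p = x * (10 - x)                        # objective after substituting y = 10 - x
critical_points = sp.solve(sp.diff(p, x), x)
print(critical_points)                  # [5], so x = 5 and y = 10 - 5 = 5
</syntaxhighlight>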
====Lagrange multiplier====
If the constrained problem has only equality constraints, the method of [[Lagrange multipliers]] can be used to convert it into an unconstrained problem whose number of variables is the original number of variables plus the original number of equality constraints. Alternatively, if the constraints are all equality constraints and are all linear, they can be solved for some of the variables in terms of the others, and the former can be substituted out of the objective function, leaving an unconstrained problem in a smaller number of variables.
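For the example above (maximizing <math>x \cdot y</math> subject to <math>x + y = 10</math>), the method forms the Lagrangian
:<math>\mathcal{L}(x, y, \lambda) = x y + \lambda (10 - x - y),</math>
and setting its partial derivatives to zero gives <math>y = \lambda</math>, <math>x = \lambda</math> and <math>x + y = 10</math>, so <math>x = y = 5</math> as before.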
===Inequality constraints===
With inequality constraints, the problem can be characterized in terms of the [[geometric optimality conditions]], [[Fritz John conditions]] and [[Karush–Kuhn–Tucker conditions]], under which simple problems may be solvable.
====Linear programming====
If the objective function and all of the hard constraints are linear and some hard constraints are inequalities, then the problem is a [[linear programming]] problem. This can be solved by the [[simplex method]], which usually works in [[polynomial time]] in the problem size but is not guaranteed to, or by [[interior point method]]s which are guaranteed to work in polynomial time.
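As an illustrative sketch (the choice of SciPy and of this particular toy problem is an assumption, not prescribed by the text), a small linear program can be set up and solved as follows:

<syntaxhighlight lang="python">
# A minimal linear-programming sketch: maximize x + 2y subject to
# x + y <= 4, x >= 0, y >= 0 (linprog minimizes, so the objective is negated).
from scipy.optimize import linprog

result = linprog(c=[-1, -2],             # minimize -(x + 2y)
                 A_ub=[[1, 1]], b_ub=[4],
                 bounds=[(0, None), (0, None)])
print(result.x, -result.fun)             # optimal point [0, 4] with value 8
</syntaxhighlight>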
====Nonlinear programming====
If the objective function or some of the constraints are nonlinear, and some constraints are inequalities, then the problem is a [[nonlinear programming]] problem.
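For instance, a small nonlinear program with one inequality constraint can be handled by a general-purpose solver; the SciPy-based sketch below is only an illustration of the problem class, and the toy problem is an assumption:

<syntaxhighlight lang="python">
# A minimal nonlinear-programming sketch: minimize (x - 1)^2 + (y - 2)^2
# subject to x^2 + y^2 <= 1 (SLSQP expects inequality constraints as fun(v) >= 0).
from scipy.optimize import minimize

objective = lambda v: (v[0] - 1) ** 2 + (v[1] - 2) ** 2
constraint = {'type': 'ineq', 'fun': lambda v: 1 - v[0] ** 2 - v[1] ** 2}

result = minimize(objective, x0=[0.0, 0.0], method='SLSQP', constraints=[constraint])
print(result.x)   # about [0.447, 0.894]: the closest point of the unit disk to (1, 2)
</syntaxhighlight>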
====Quadratic programming====
If all the hard constraints are linear and some are inequalities, but the objective function is quadratic, the problem is a [[quadratic programming]] problem. It is one type of nonlinear programming. It can still be solved in polynomial time by the [[ellipsoid method]] if the objective function is [[Convex function|convex]]; otherwise the problem may be [[NP-hardness|NP-hard]].
====KKT conditions====
Allowing inequality constraints, the [[Karush-Kuhn-Tucker conditions|KKT approach]] to nonlinear programming generalizes the method of Lagrange multipliers. It can be applied under differentiability and convexity.
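For a problem of minimizing <math>f(x)</math> subject to <math>g_i(x) \le 0</math> and <math>h_j(x) = 0</math>, the KKT conditions for a candidate point <math>x^*</math> with multipliers <math>\mu_i \ge 0</math> and <math>\lambda_j</math> require stationarity,
:<math>\nabla f(x^*) + \sum_i \mu_i \nabla g_i(x^*) + \sum_j \lambda_j \nabla h_j(x^*) = 0,</math>
together with primal feasibility (<math>g_i(x^*) \le 0</math>, <math>h_j(x^*) = 0</math>), dual feasibility (<math>\mu_i \ge 0</math>) and complementary slackness (<math>\mu_i \, g_i(x^*) = 0</math>).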
====Branch and bound====
Constraint optimization can be solved by [[branch and bound]] algorithms. These are backtracking algorithms that store the cost of the best solution found during execution and use it to avoid part of the search.
Assuming that cost is to be minimized, the efficiency of these algorithms depends on how the cost that can be obtained from extending a partial solution is evaluated: whenever this estimated cost is no lower than the best cost found so far, the algorithm can backtrack and skip part of the search. The tighter (that is, the higher, while still never overestimating the true cost of the best extension) the estimate, the more likely it is to exceed the best cost found so far, and the more of the search can be pruned.
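The sketch below illustrates this pruning on a toy problem (the problem, the bounding rule and all names are illustrative assumptions): binary variables with per-variable costs, a lower bound obtained by ignoring the constraint, and backtracking whenever that bound cannot improve on the best solution found so far.

<syntaxhighlight lang="python">
# A minimal branch-and-bound sketch (illustrative assumptions throughout):
# minimize sum(cost[i][x_i]) over binary x_i, subject to sum(x_i) >= k.
def branch_and_bound(cost, k):
    n = len(cost)
    best = {'value': float('inf'), 'assignment': None}

    def lower_bound(partial):
        # Optimistic estimate: cost of the assigned variables plus the cheapest
        # choice for each remaining variable, ignoring the constraint.
        bound = sum(cost[i][v] for i, v in enumerate(partial))
        bound += sum(min(c) for c in cost[len(partial):])
        return bound

    def can_still_satisfy(partial):
        # sum(x_i) >= k is still reachable if all remaining variables set to 1
        # would bring the total up to k.
        return sum(partial) + (n - len(partial)) >= k

    def search(partial):
        if not can_still_satisfy(partial):
            return
        if lower_bound(partial) >= best['value']:
            return  # prune: no extension can beat the best solution found so far
        if len(partial) == n:
            best['value'] = lower_bound(partial)   # exact cost at a full assignment
            best['assignment'] = list(partial)
            return
        for v in (0, 1):
            search(partial + [v])

    search([])
    return best['value'], best['assignment']

# Example: three variables, at least two must be set to 1.
print(branch_and_bound([[0, 3], [0, 1], [0, 2]], k=2))  # (3, [0, 1, 1])
</syntaxhighlight>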
==See also==
* [[Constrained least squares]]
* [[Distributed constraint optimization]]
==References==
{{reflist}}
==Further reading==
*{{cite book |first=Dimitri P. |last=Bertsekas |authorlink=Dimitri Bertsekas |title=Constrained Optimization and Lagrange Multiplier Methods |___location=New York |publisher=Academic Press |year=1982 |isbn=0-12-093480-9 }}
*{{cite book
|