In [[mathematical optimization]], '''constrained optimization''' (in some contexts called '''constraint optimization''') is the process of optimizing an objective function in the presence of [[Constraint (mathematics)|constraints]]: either ''hard constraints'', which are required to be satisfied, or ''soft constraints'', which are penalized in the objective function if, and to the extent that, they are not satisfied. The objective function is either a [[Loss function|cost function]], which is to be minimized, or a function, sometimes called a [[utility function]], which is to be maximized.
 
==General form==
 
A constraint optimization problem can be defined as a regular [[constraint satisfaction problem]] in which constraints are weighted and the goal is to find a solution maximizing the weight of satisfied constraints.
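The weighted-constraint view above can be sketched in a few lines. This is a minimal illustrative example (the variables, domains, constraints, and weights are all made up for this sketch, not taken from any standard benchmark): each constraint carries a weight, and a brute-force search finds the assignment that maximizes the total weight of satisfied constraints.

```python
from itertools import product

# Two variables x, y over the domain {0, 1, 2}; each constraint is a
# (weight, predicate) pair.  The goal is the assignment maximizing the
# total weight of the constraints it satisfies.
domain = [0, 1, 2]
constraints = [
    (3, lambda x, y: x != y),      # weight 3: x and y must differ
    (2, lambda x, y: x + y >= 3),  # weight 2: sum must be at least 3
    (1, lambda x, y: x == 0),      # weight 1: prefer x = 0
]

def satisfied_weight(x, y):
    # Total weight of the constraints satisfied by the assignment (x, y).
    return sum(w for w, c in constraints if c(x, y))

# Exhaustive search over all assignments (feasible only for tiny problems).
best = max(product(domain, domain), key=lambda a: satisfied_weight(*a))
```

Here `best` satisfies constraints totaling weight 5 (the first two constraints but not the third); no assignment satisfies all three, so maximizing the satisfied weight is the natural relaxation.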
 
Alternatively, a constraint optimization problem can be defined as a regular constraint satisfaction problem augmented with a number of "local" cost functions. The aim of constraint optimization is to find a solution to the problem whose cost, evaluated as the sum of the cost functions, is minimized. The regular constraints are called ''hard constraints'', while the cost functions are called ''soft constraints''. These names reflect that hard constraints must necessarily be satisfied, while soft constraints only express a preference for some solutions (those of lower cost) over others (those of higher cost).
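The hard/soft distinction in this formulation can be sketched as follows (a toy example; the domain, the hard constraint, and the cost functions are all illustrative): hard constraints filter out infeasible assignments, soft constraints are summed into a total cost, and the optimum is the feasible assignment of minimum total cost.

```python
from itertools import product

domain = range(4)                         # each variable takes a value in {0, 1, 2, 3}
hard = [lambda x, y: x + y == 4]          # hard constraint: must hold exactly
soft = [lambda x, y: abs(x - 3),          # soft constraint: prefer x near 3
        lambda x, y: 2 * y]               # soft constraint: prefer small y

def cost(x, y):
    # Total cost = sum of the local (soft) cost functions.
    return sum(c(x, y) for c in soft)

# Keep only assignments satisfying every hard constraint...
feasible = [(x, y) for x, y in product(domain, domain)
            if all(h(x, y) for h in hard)]
# ...then minimize the summed soft-constraint cost over them.
best = min(feasible, key=lambda a: cost(*a))
```

The feasible assignments are (1, 3), (2, 2), and (3, 1) with costs 8, 5, and 2, so the optimum is (3, 1): the hard constraint is satisfied exactly, and the soft constraints are traded off against each other.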
 
A general constrained optimization problem may be written as follows:
<math>
\begin{array}{rcll}
\min &~& f(\mathbf{x}) & \\
\mathrm{subject~to} &~& g_i(\mathbf{x}) = c_i &\mathrm{for~} i=1,\cdots,n \quad \rm{Equality~constraints} \\
&~& h_j(\mathbf{x}) \geq d_j &\mathrm{for~} j=1,\cdots,m \quad \rm{Inequality~constraints}
\end{array}
</math>
 
where <math>\mathbf{x}</math> is a vector residing in an n-dimensional space, <math>f(\mathbf{x})</math> is a scalar-valued objective function, and <math> g_i(\mathbf{x}) = c_i ~\mathrm{for~} i=1,\cdots,n </math> and <math> h_j(\mathbf{x}) \geq d_j ~\mathrm{for~} j=1,\cdots,m </math> are constraints that are required to be satisfied; these are called [[Constraint (mathematics)#Hard and soft constraints|hard constraints]].
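The general form can be illustrated on a small concrete instance (the objective, constraints, and grid resolution below are all choices made for this sketch): minimize <math>f(\mathbf{x}) = x_1^2 + x_2^2</math> subject to the equality constraint <math>x_1 + x_2 = 2</math> and the inequality constraint <math>x_1 \geq 0.5</math>, here solved by brute force after using the equality constraint to eliminate one variable.

```python
def f(x1, x2):
    # Objective function to minimize.
    return x1 ** 2 + x2 ** 2

grid = [i / 100 for i in range(-200, 201)]   # candidate x1 values, step 0.01
best = None
for x1 in grid:
    x2 = 2 - x1            # equality constraint g(x) = c enforced exactly
    if x1 >= 0.5:          # inequality constraint h(x) >= d
        if best is None or f(x1, x2) < f(*best):
            best = (x1, x2)
```

On this instance the optimum is <math>x_1 = x_2 = 1</math> with objective value 2: the equality constraint is active by construction, while the inequality constraint is inactive at the optimum.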
 
In some problems, often called ''constraint optimization problems'', the objective function is actually the sum of cost functions each of which penalizes the extent (if any) to which a [[Constraint (mathematics)#Hard and soft constraints|soft constraint]] (a constraint which is preferred but not required to be satisfied) is violated.
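This penalty view can be sketched on a one-dimensional toy problem (the base objective, the soft constraint, and the penalty weight are all illustrative): the soft constraint <math>x \leq 3</math> is folded into the objective as a term proportional to the amount of violation, and the penalized objective is then minimized unconstrained.

```python
def penalized(x, lam=10.0):
    base = (x - 5) ** 2              # underlying objective: prefer x near 5
    violation = max(0.0, x - 3.0)    # soft constraint x <= 3: violation amount
    return base + lam * violation    # penalty grows with the extent of violation

grid = [i / 100 for i in range(0, 601)]  # candidate x values in [0, 6], step 0.01
best = min(grid, key=penalized)
```

With penalty weight 10, the minimizer is x = 3: below 3 the base cost still decreases toward 5, while above 3 the penalty outweighs the gain, so the soft constraint holds exactly at the optimum. A smaller penalty weight would let the optimum drift above 3, trading some violation for a lower base cost.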
 
==Solution methods==