Robust optimization is a field of [[optimization_(mathematics)|optimization]] theory that deals with optimization problems in which robustness is sought against [[uncertainty]] and/or variability in the values of the problem's parameters.
== History ==
The origins of robust optimization date back to the establishment of modern [[decision theory]] in the 1950s and the use of '''worst case analysis''' and [[Wald's_maximin_model|Wald's maximin model]] as a tool for the treatment of severe uncertainty. It became a field of its own in the 1970s with parallel developments in fields such as [[operations research]], [[control theory]], [[statistics]], [[economics]], and more.
== Example ==
Consider the simple [[linear programming]] problem
:<math>\max_{x,y} \ \{3x + 2y\} \ \ \text{subject to} \ \ x,y\ge 0; \ cx + dy \le 10, \forall (c,d)\in P</math>
where <math>P</math> is a given subset of <math>\mathbb{R}^{2}</math>.
What makes this a 'robust optimization' problem is the <math>\forall (c,d)\in P</math> clause in the constraints. Its implication is that for a pair <math>(x,y)</math> to be admissible, the constraint <math>cx + dy \le 10</math> must be satisfied by the '''worst''' <math>(c,d)\in P</math> pertaining to <math>(x,y)</math>, namely the pair <math>(c,d)\in P</math> that maximizes the value of <math>cx + dy</math> for the given value of <math>(x,y)</math>.
If the parameter space <math>P</math> is finite (consisting of finitely many elements), then this robust optimization problem itself is a [[linear programming]] problem: for each <math>(c,d)\in P</math> there is a linear constraint <math>cx + dy \le 10</math>.
If <math>P</math> is not a finite set, then this problem is a linear [[semi-infinite programming]] problem, namely a linear programming problem with finitely many (2) decision variables and infinitely many constraints.
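For the finite case, the reduction to an ordinary linear programming problem can be made concrete. The sketch below uses a made-up three-element parameter set <math>P</math> (the article fixes only the constraint <math>cx + dy \le 10</math> and the objective, not a particular <math>P</math>) and solves the resulting small LP by brute-force vertex enumeration, so that only the standard library is needed:

```python
# The finite-P case turns the robust problem into an ordinary LP with one
# constraint per element of P.  P below is a made-up illustrative set.
from itertools import combinations

P = [(1.0, 1.0), (2.0, 1.0), (1.0, 3.0)]  # hypothetical finite parameter set

# Constraint boundary lines a*x + b*y = r:  x = 0, y = 0, c*x + d*y = 10
lines = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)] + [(c, d, 10.0) for c, d in P]

def feasible(x, y, eps=1e-9):
    return x >= -eps and y >= -eps and all(c*x + d*y <= 10.0 + eps for c, d in P)

# An LP optimum lies at a vertex: intersect every pair of boundary lines,
# keep the feasible intersections, and take the one maximizing 3x + 2y.
best = None
for (a1, b1, r1), (a2, b2, r2) in combinations(lines, 2):
    det = a1*b2 - a2*b1
    if abs(det) < 1e-12:
        continue                          # parallel lines: no vertex
    x = (r1*b2 - r2*b1) / det             # Cramer's rule
    y = (a1*r2 - a2*r1) / det
    if feasible(x, y):
        val = 3*x + 2*y
        if best is None or val > best[0]:
            best = (val, x, y)

val, x, y = best
print(round(val, 4), round(x, 4), round(y, 4))  # -> 16.0 4.0 2.0
```

For this particular <math>P</math> only the constraints <math>2x + y \le 10</math> and <math>x + 3y \le 10</math> are active at the optimum <math>(x,y) = (4,2)</math>; the admissible pair must survive all three, which is exactly the worst-case reading of the <math>\forall (c,d)\in P</math> clause.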
== Classification ==
There are a number of classification criteria for robust optimization problems/models. In particular, one can distinguish between problems dealing with '''local''' and '''global''' models of robustness; and between '''probabilistic''' and '''non-probabilistic''' models of robustness. Modern robust optimization deals primarily with non-probabilistic models of robustness that are [[worst case]] oriented and as such usually deploy [[Wald's_maximin_model|Wald's maximin models]].
== Local robustness ==
There are cases where robustness is sought against small perturbations in a nominal value of a parameter. A very popular model of local robustness is the [[stability radius|radius of stability]] model:
: <math>\hat{\rho}(x,\hat{u}):= \max_{\rho\ge 0}\ \{\rho: u\in S(x), \forall u\in B(\rho,\hat{u})\}</math>
where <math>\hat{u}</math> denotes the nominal value of the parameter, <math>B(\rho,\hat{u})</math> denotes a ball of radius <math>\rho</math> centered at <math>\hat{u}</math> and <math>S(x)</math> denotes the set of values of <math>u</math> that satisfy given stability/performance conditions associated with decision <math>x</math>.
In words, the robustness (radius of stability) of decision <math>x</math> is the radius of the largest ball centered at <math>\hat{u}</math> all of whose elements satisfy the stability requirements imposed on <math>x</math>. The picture is this:
[[Image:Local_robustness.png|500px]]
where the rectangle <math>U(x)</math> represents the set of all the values <math>u</math> associated with decision <math>x</math>.
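The radius of stability can be estimated numerically when the set <math>S(x)</math> can be tested pointwise. The sketch below is a minimal illustration with a made-up scalar stability condition (decision <math>x</math> is "stable" for <math>u</math> when <math>xu \le 1</math>) and assumes the set of safe <math>u</math> values is an interval containing <math>\hat{u}</math>, so that checking the two endpoints of each ball suffices:

```python
# Numerical sketch of the radius of stability for a scalar parameter u.
# The stability condition and nominal value u_hat are made-up examples.
def radius_of_stability(satisfies, u_hat, rho_max=10.0, tol=1e-6):
    """Largest rho such that every u in [u_hat - rho, u_hat + rho] satisfies
    the stability condition (assumes the safe set is an interval around
    u_hat, so checking the two endpoints of the ball suffices)."""
    lo, hi = 0.0, rho_max
    while hi - lo > tol:
        rho = (lo + hi) / 2
        if satisfies(u_hat - rho) and satisfies(u_hat + rho):
            lo = rho          # ball of radius rho is still entirely safe
        else:
            hi = rho          # ball already contains a bad value of u
    return lo

x = 2.0
r = radius_of_stability(lambda u: x * u <= 1.0, u_hat=0.0)
print(round(r, 4))  # -> 0.5: stability is lost at u = 1/x
```

Here the bisection recovers <math>\hat{\rho}(x,\hat{u}) = 1/x = 0.5</math>: the largest ball around the nominal value all of whose elements satisfy the stability requirement.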
== Global robustness ==
Consider the simple abstract robust optimization problem
:<math>\max_{x\in X} \ \{f(x): g(x,u)\le b, \forall u\in U\}</math>
where <math>U</math> denotes the set of all ''possible'' values of <math>u</math> under consideration.
This is a ''global'' robust optimization problem in the sense that the robustness constraint <math>g(x,u)\le b, \forall u\in U</math> represents all the ''possible'' values of <math>u</math>.
The difficulty is that such a "global" constraint can be too demanding in that there is no <math>x\in X</math> that satisfies this constraint. But even if such an <math>x\in X</math> exists, the constraint can be too "conservative" in that it yields a solution <math>x\in X</math> that generates a very small payoff <math>f(x)</math> that is not representative of the performance of other decisions in <math>X</math>. For instance, there could be an <math>x'\in X</math> that only slightly violates the robustness constraint but yields a very large payoff <math>f(x')</math>.
In such cases it might be necessary to relax the robustness constraint somewhat.
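The trade-off between the demanding global constraint and a relaxed one can be made concrete with a toy instance. Everything in the sketch below is made up for illustration: payoff <math>f(x) = x</math>, constraint function <math>g(x,u) = xu</math>, <math>b = 1</math>, and <math>U = [0,2]</math> approximated by a grid:

```python
# Toy illustration of how demanding a global robustness constraint can be.
# f, g, U and b are made-up: f(x) = x, g(x,u) = x*u, b = 1, U = [0, 2].
U_grid = [i * 2.0 / 100 for i in range(101)]   # grid approximation of U

def robust_feasible(x, b=1.0):
    """x is admissible iff g(x,u) <= b for every u in U (checked on the grid)."""
    return all(x * u <= b for u in U_grid)

# Best payoff under the global constraint: x*u <= 1 for all u in [0,2] => x <= 0.5
best_global = max(x / 100 for x in range(101) if robust_feasible(x / 100))

# Slightly relaxing the constraint (tolerating small violations of b = 1)
# enlarges the feasible set and improves the guaranteed payoff.
best_relaxed = max(x / 100 for x in range(101) if robust_feasible(x / 100, b=1.1))

print(best_global, best_relaxed)  # -> 0.5 0.55
```

The worst case is driven entirely by the extreme value <math>u = 2</math>; a mild relaxation of <math>b</math> immediately admits decisions with larger payoffs, which is the conservatism the text describes.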
== Non-probabilistic robust optimization models ==
The dominating paradigm in this area of robust optimization is [[Wald's_maximin_model|Wald's maximin model]], namely
: <math>\max_{x\in X}\min_{u\in U(x)} f(x,u)</math>
where the <math>\max</math> represents the decision maker, the <math>\min</math> represents Nature, namely [[uncertainty]], <math>X</math> represents the decision space and <math>U(x)</math> denotes the set of possible values of <math>u</math> associated with decision <math>x</math>. This is the ''classic'' format of the generic model.
The equivalent [[mathematical programming]] (MP) format is
:<math>\max_{x\in X,v\in \mathbb{R}} \ \{v: v\le f(x,u), \forall u\in U(x)\}</math>
Constraints can be incorporated explicitly in these models. The generic constrained classic format is
: <math>\max_{x\in X}\min_{u\in U(x)} \ \{f(x,u): g(x,u)\le b,\forall u\in U(x)\}</math>
The equivalent constrained MP format is
:<math>\max_{x\in X,v\in \mathbb{R}} \ \{v: v\le f(x,u), g(x,u)\le b, \forall u\in U(x)\}</math>
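When both <math>X</math> and <math>U(x)</math> are finite, the classic maximin format can be evaluated directly by nesting the two optimizations. The decision space, uncertainty set, and payoff function below are made-up examples (with <math>U(x) = U</math> for every <math>x</math>, for simplicity):

```python
# Direct evaluation of Wald's maximin model over finite X and U:
# max over x of (min over u of f(x,u)).  X, U and f are made-up examples.
X = [0, 1, 2, 3]
U = [1, 2]                      # here U(x) = U for every x, for simplicity

def f(x, u):
    return -(x - u) ** 2        # payoff: penalty for missing the unknown u

# Classic format: the decision maker moves first, Nature answers worst-case.
security_level = {x: min(f(x, u) for u in U) for x in X}
v_star = max(security_level.values())
best_x = [x for x in X if security_level[x] == v_star]
print(v_star, best_x)           # -> -1 [1, 2]
```

The value <math>v^{*}</math> computed here is exactly the optimal <math>v</math> of the equivalent MP (epigraph) format: the largest <math>v</math> for which some <math>x\in X</math> satisfies <math>v\le f(x,u)</math> for all <math>u\in U(x)</math>.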
== Probabilistic robust optimization models ==
These models quantify the uncertainty in the "true" value of the parameter of interest by probability distribution functions. They have been traditionally classified as [[stochastic programming]] and [[stochastic optimization]] models.
== See also ==
* [[Minimax]]
* [[Minimax regret]]
* [[Robust statistics]]
* [[Stochastic programming]]
* [[Stochastic optimization]]
* [[Info-gap decision theory]]