'''Robust optimization'''. A term given to an approach to deal with uncertainty, similar to the recourse model of [[stochastic programming]], except that feasibility for all possible realizations (called scenarios) is replaced by a [[penalty function]] in the objective. As such, the approach integrates [[goal programming]] with a scenario-based description of problem data. To illustrate, consider the LP:
:<math>\min\; cx + dy : \; Ax = b,\; Bx + Cy = e,\; x \ge 0,\; y \ge 0,</math>
where d, B, C and e are random variables with possible realizations {(d(s), B(s), C(s), e(s)): s = 1,...,N} (N = number of scenarios). The robust optimization model for this LP is:
<math>\min f(x, y(1), \dots, y(N)) + wP(z(1), \dots, z(N)): \; Ax = b,\; x \ge 0,</math>
<math>B(s)x + C(s)y(s) + z(s) = e(s),\; y(s) \ge 0, \text{ for all } s = 1, \dots, N,</math>
where f is a function that measures the cost of the policy, P is a penalty function, and w > 0 is a parameter that trades off the cost of infeasibility. One example of f is the expected value: f(x, y) = cx + Sum_s{d(s)y(s)p(s)}, where p(s) is the probability of scenario s. In a worst-case model, f(x, y) = Max_s{d(s)y(s)}. The '''penalty function''' is defined to be zero if (x, y) is feasible for all scenarios, i.e., P(0) = 0. In addition, P satisfies a form of monotonicity: worse violations incur greater penalty. This often has the form P(z) = U(z^+) + V(-z^-), the "up" and "down" penalties, where U and V are strictly increasing functions.
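As a concrete sketch, the expected-value form of this model can be assembled as a single LP: split each violation z(s) into nonnegative parts zp(s) - zn(s), so a linear penalty P(z) = Sum_s |z(s)| becomes expressible with linear terms. All problem data below (A, b, c, and the two scenarios) are hypothetical, chosen only to illustrate; SciPy's <code>linprog</code> is used as the solver.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 2 first-stage variables x, 1 recourse variable y(s),
# N = 2 equally likely scenarios.
c = np.array([1.0, 2.0])                        # first-stage cost c
A = np.array([[1.0, 1.0]]); b = np.array([1.0]) # Ax = b
p = [0.5, 0.5]                                  # scenario probabilities p(s)
d = [1.0, 1.0]                                  # recourse costs d(s)
B = [np.array([2.0, 1.0]), np.array([1.0, 3.0])]
C = [1.0, 1.0]
e = [3.0, 2.0]
w = 10.0                                        # infeasibility trade-off weight
N = 2

# Variable layout: [x1, x2, y(1), y(2), zp(1), zn(1), zp(2), zn(2)],
# with z(s) = zp(s) - zn(s), all variables >= 0.
n = 2 + N + 2 * N
obj = np.zeros(n)
obj[0:2] = c
for s in range(N):
    obj[2 + s] = p[s] * d[s]             # expected recourse cost
    obj[2 + N + 2 * s] = w               # penalty on zp(s)
    obj[2 + N + 2 * s + 1] = w           # penalty on zn(s)

A_eq = np.zeros((1 + N, n)); b_eq = np.zeros(1 + N)
A_eq[0, 0:2] = A[0]; b_eq[0] = b[0]      # Ax = b
for s in range(N):
    row = 1 + s                          # B(s)x + C(s)y(s) + z(s) = e(s)
    A_eq[row, 0:2] = B[s]
    A_eq[row, 2 + s] = C[s]
    A_eq[row, 2 + N + 2 * s] = 1.0       # +zp(s)
    A_eq[row, 2 + N + 2 * s + 1] = -1.0  # -zn(s)
    b_eq[row] = e[s]

res = linprog(obj, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * n, method="highs")
print("x =", res.x[:2], " objective =", res.fun)
```

With w large relative to the recourse costs, the solver drives all z(s) to zero whenever a fully feasible policy exists, recovering the recourse model; shrinking w lets the model trade feasibility for cost, which is the distinguishing feature of this approach.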