Robust optimization
<math>\ B(s)x + C(s)y(s) + z(s) = e(s),</math> and <math>y(s) \ge 0,\, \forall s = 1,...,N,</math>
 
where f is a function that measures the cost of the policy, P is a penalty function, and w > 0 is a parameter that trades off cost against infeasibility. One example of f is the expected value, <math>\ f(x, y) = cx + \sum_s{d(s)y(s)p(s)}</math>, where p(s) is the probability of scenario s. In a worst-case model, <math>\ f(x,y) = \max_s{d(s)y(s)}</math>. The '''penalty function''' is defined to be zero when (x, y) is feasible for all scenarios -- i.e., P(0) = 0. In addition, P satisfies a form of monotonicity: worse violations incur a greater penalty. This often has the form <math>\ P(z) = U(z^+) + V(-z^-)</math> -- i.e., separate "up" and "down" penalties, where U and V are strictly increasing functions with U(0) = V(0) = 0.
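As an illustration of these definitions (not part of the article's model), the sketch below evaluates the expected-value and worst-case forms of f for a small scenario set, and a penalty P built from quadratic "up" and "down" terms; the quadratic choice of U and V and all the sample numbers are assumptions made for the example.

```python
def expected_cost(c, x, d, y, p):
    """Expected-value model: f(x, y) = c*x + sum_s d(s)*y(s)*p(s)."""
    return c * x + sum(ds * ys * ps for ds, ys, ps in zip(d, y, p))

def worst_case_cost(d, y):
    """Worst-case model: f(x, y) = max_s d(s)*y(s)."""
    return max(ds * ys for ds, ys in zip(d, y))

def penalty(z, w=1.0):
    """P(z) = U(z+) + V(-z-) with the (hypothetical) choice
    U(t) = V(t) = t**2, scaled by the trade-off weight w.
    P is zero when every z(s) = 0, and grows with the violation."""
    up = sum(max(zs, 0.0) ** 2 for zs in z)      # positive parts z+
    down = sum(max(-zs, 0.0) ** 2 for zs in z)   # negative parts -z-
    return w * (up + down)

# Two scenarios with equal probability:
f_exp = expected_cost(2.0, 1.0, [1.0, 2.0], [3.0, 4.0], [0.5, 0.5])
f_max = worst_case_cost([1.0, 2.0], [3.0, 4.0])
pen = penalty([1.0, -2.0])
```

Here `f_exp` is 2.0 + 0.5*3.0 + 0.5*8.0 = 7.5, `f_max` is 8.0, and `pen` is 1 + 4 = 5.0, while `penalty([0.0, 0.0])` is 0, matching the requirement P(0) = 0.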
 
The above model makes robust optimization similar (at least in form) to a [[goal program]]. More recently, the robust optimization community has defined the term differently: it optimizes against the worst-case realization of the uncertainty. Let the uncertain MP be given by