Karmarkar–Karp bin packing algorithms: Difference between revisions

* If the approximate oracle returns a solution with value larger than 1, then <math>\mathbf{y}_t</math> is definitely infeasible, and the solution corresponds to a configuration '''a''' whose constraint is violated. We do a "feasibility cut" at '''<math>\mathbf{y}_t</math>''': we cut from the ellipsoid all points '''y''' for which <math>\mathbf{a}\cdot \mathbf{y} > 1</math>.
* If the approximate oracle returns a solution with value at most 1, then '''<math>\mathbf{y}_t</math>''' may or may not be feasible, but '''<math>\mathbf{y}_t</math>''' rounded down (denote it by '''<math>\mathbf{z}_t</math>''') is feasible. By definition of the rounding, we know that <math>\mathbf{n}\cdot \mathbf{z}_t \geq \mathbf{n}\cdot \mathbf{y}_t - \mathbf{n}\cdot \mathbf{1}\cdot (\delta/n) = \mathbf{n}\cdot \mathbf{y}_t - \delta</math>. We still do an "optimality cut" at '''<math>\mathbf{y}_t</math>''': we cut from the ellipsoid all points '''y''' for which <math>\mathbf{n}\cdot \mathbf{y} < \mathbf{n}\cdot \mathbf{y}_t</math>. Every removed point satisfies <math>\mathbf{n}\cdot \mathbf{y} < \mathbf{n}\cdot \mathbf{z}_t+\delta</math>, i.e., it exceeds the value of the feasible solution '''<math>\mathbf{z}_t</math>''' by less than <math>\delta</math>, so the cut loses at most <math>\delta</math> of the optimal dual value.
Using the approximate separation oracle gives a feasible solution '''y*''' to the dual LP, with <math>\mathbf{n}\cdot \mathbf{y^*} \geq LOPT-\delta</math>, after at most ''Q'' iterations, where <math>Q = 4m^2 \ln(mn/g\delta)</math>.
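The two cut types above can be sketched in Python. This is a minimal illustration, not the full ellipsoid method: <code>approx_oracle</code> is a hypothetical stand-in for the configuration oracle, and vectors are plain lists.

```python
import math

def ellipsoid_cut(y, n_vec, approx_oracle, delta):
    """Return the cut the ellipsoid method makes at candidate point y.

    y             -- current candidate dual solution (list of floats)
    n_vec         -- multiplicity vector n; sum(n_vec) = n, the number of items
    approx_oracle -- hypothetical stand-in: returns (value, a), where a is a
                     configuration approximately maximizing a.y
    delta         -- tolerance used for the optimality cut
    """
    value, a = approx_oracle(y)
    if value > 1:
        # y is definitely infeasible: configuration a has a.y > 1.
        # Feasibility cut: discard all points y' with a.y' > 1.
        return "feasibility", a
    # Round y down to an integer multiple of delta/n in each coordinate;
    # the rounded point z is feasible and satisfies n.z >= n.y - delta.
    step = delta / sum(n_vec)
    z = [math.floor(yj / step) * step for yj in y]
    # Optimality cut: discard all points y' with n.y' < n.y.
    return "optimality", z
```

The caller would then shrink the ellipsoid along the returned hyperplane; that geometric step is omitted here.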
 
=== Eliminating constraints ===
During the ellipsoid method, we use at most ''Q'' constraints of the form <math>\mathbf{a}\cdot \mathbf{y} \leq 1</math>, plus the trivial constraints <math>0 \leq y_j \leq 1</math>. All the other constraints can be eliminated, since they have no effect on the outcome '''y*''' of the ellipsoid method. We can eliminate even more constraints. It is known that, in any LP with ''m'' variables, there is a set of ''m'' constraints that is sufficient for determining the optimal solution (that is, the optimal value is the same even if only these ''m'' constraints are used). We can repeatedly run the ellipsoid method as above, each time trying to remove a specific set of constraints. If the resulting error is at most <math>\delta</math>, then we remove these constraints permanently. It can be shown that we need at most <math>\approx (Q/m) + m\ln(Q/m)</math> eliminations, so the accumulated error is at most <math>\approx \delta\cdot [(Q/m) + m\ln(Q/m)]</math>. If we try sets of constraints deterministically, then in the worst case, one out of ''m'' trials succeeds, so we need to run the ellipsoid method at most <math>\approx m\cdot[(Q/m) + m\ln(Q/m)]</math> times, giving a total run-time of <math>O\left(m^8 \log{m} \log^2(\frac{m n}{g h}) + \frac{m^4 n \log{m}}{h}\log(\frac{m n}{g h}) \right)</math>. If we choose the constraints to remove at random, then the expected number of iterations is <math> O(m)\cdot[1 + \ln(Q/m)]</math>, and the run-time becomes <math>O\left(m^7 \log{m} \log^2(\frac{m n}{g h}) + \frac{m^4 n \log{m}}{h}\log(\frac{m n}{g h}) \right)</math>.
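The elimination loop can be sketched generically. This is a toy illustration: <code>solve</code> stands in for a full re-run of the ellipsoid method on a constraint subset, and all names are hypothetical.

```python
import random

def eliminate_constraints(constraints, solve, delta, target_m, seed=0):
    """Sketch of iterative constraint elimination (hypothetical interface).

    constraints -- current list of constraints
    solve       -- black box returning the (approximate) optimal value when
                   only the given constraints are used; stands in for a full
                   run of the ellipsoid method
    delta       -- a batch is dropped only if the value changes by <= delta
    target_m    -- stop once only target_m constraints remain
    """
    rng = random.Random(seed)
    active = list(constraints)
    value = solve(active)
    while len(active) > target_m:
        # Try removing a batch of constraints, chosen at random here
        # (the deterministic variant cycles through fixed batches instead).
        k = max(1, (len(active) - target_m) // 2)
        batch = set(rng.sample(range(len(active)), k))
        trial = [c for i, c in enumerate(active) if i not in batch]
        trial_value = solve(trial)
        if abs(value - trial_value) <= delta:
            # The batch had no real effect on the optimum: drop it for good.
            active, value = trial, trial_value
        # Otherwise the batch is retained and a different batch is tried.
    return active
```

Each accepted trial can move the optimum by up to <math>\delta</math>, which is why the errors accumulate across eliminations as described above.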
 
Finally, we have a reduced dual LP, with only ''m'' variables and ''m'' constraints. The optimal value of the reduced LP is at least <math> LOPT-\epsilon</math>, where <math>\epsilon \approx \delta\cdot [(Q/m) + m\ln(Q/m)]</math>.
 
=== Solving the primal LP ===
By the [[Dual linear program|LP duality theorem]], the minimum value of the primal LP equals the maximum value of the dual LP; we denoted this optimal value by LOPT.
Once we have a reduced dual LP, we take its dual, obtaining a reduced primal LP. This LP has only ''m'' variables, corresponding to only ''m'' out of the ''C'' configurations. By the [[Dual linear program|LP duality theorem]], the minimum value of this reduced primal LP equals the maximum value of the reduced dual LP, which is at least <math> LOPT-\epsilon</math>.
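Written out with the notation used above ('''a''' ranging over the ''m'' surviving configurations, ''A'' the matrix whose columns are those configurations, and '''n''' the vector of item multiplicities), the reduced primal–dual pair is:

```latex
\begin{aligned}
\text{reduced primal:}\quad & \min_{\mathbf{x}\ge 0}\ \mathbf{1}\cdot\mathbf{x}
  \quad\text{s.t.}\quad A\,\mathbf{x} \ge \mathbf{n},\\[2pt]
\text{reduced dual:}\quad & \max_{\mathbf{y}\ge 0}\ \mathbf{n}\cdot\mathbf{y}
  \quad\text{s.t.}\quad \mathbf{a}\cdot\mathbf{y} \le 1
  \ \text{ for each surviving configuration } \mathbf{a}.
\end{aligned}
```

By strong duality, the two optima coincide, and this common value is at least <math>LOPT-\epsilon</math>.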
 