Karmarkar–Karp bin packing algorithms

{{short description|Set of related approximation algorithms for the bin packing problem}}
{{Multiple issues|{{refimprove|date=March 2022}}{{more footnotes|date=March 2022}}}}
 
The '''Karmarkar–Karp''' ('''KK''') '''bin packing algorithms''' are several related [[approximation algorithm]]s for the [[bin packing problem]].<ref name=":12">{{cite book|last1=Karmarkar|first1=Narendra|last2=Karp|first2=Richard M.|title=23rd Annual Symposium on Foundations of Computer Science (SFCS 1982)|chapter=An efficient approximation scheme for the one-dimensional bin-packing problem|date=November 1982|chapter-url=https://ieeexplore.ieee.org/document/4568405/references#references|pages=312–320|doi=10.1109/SFCS.1982.61|s2cid=18583908|chapter-url-access=subscription}}</ref> The bin packing problem is the problem of packing items of different sizes into bins of identical capacity, such that the total number of bins is as small as possible. Finding the optimal solution is [[NP-hardness|computationally hard]]. [[Narendra Karmarkar|Karmarkar]] and [[Richard M. Karp|Karp]] devised an algorithm that runs in [[Polynomial-time|polynomial time]] and finds a solution with at most <math>\mathrm{OPT} + \mathcal{O}(\log^2(\mathrm{OPT}))</math> bins, where OPT is the number of bins in the optimal solution. They also devised several other algorithms with slightly different approximation guarantees and run-time bounds.
 
The KK algorithms were considered a breakthrough in the study of bin packing: previously known algorithms guaranteed only a multiplicative approximation, where the number of bins was at most <math>r\cdot \mathrm{OPT}+s</math> for some constants <math>r>1, s>0</math>, or at most <math>(1+\varepsilon)\mathrm{OPT} + 1</math>.<ref name=":0">{{cite journal|last1=Fernandez de la Vega|first1=W.|last2=Lueker|first2=G. S.|date=1981|title=Bin packing can be solved within 1 + ε in linear time|journal=Combinatorica|language=en|volume=1|issue=4|pages=349–355|doi=10.1007/BF02579456|issn=1439-6912|s2cid=10519631}}</ref> The KK algorithms were the first ones to attain an additive approximation.
Obviously, FOPT(''I'') ≤ OPT(''I'').
 
== High-level scheme ==
The KK algorithms essentially solve the [[configuration linear program]]:<blockquote><math>\text{minimize}~~\mathbf{1}\cdot \mathbf{x}~~~\text{s.t.}~~ A \mathbf{x}\geq \mathbf{n}~~~\text{and}~~ \mathbf{x}\geq 0~~~\text{and}~~ \mathbf{x}~\text{is an integer}~</math>.</blockquote>Here, '''''A''''' is a matrix with ''m'' rows. Each column of '''''A''''' represents a feasible ''configuration'' – a multiset of item-sizes, such that the sum of all these sizes is at most ''B''. The set of configurations is ''C''. '''x''' is a vector of size ''C''. Each element ''x<sub>c</sub>'' of '''x''' represents the number of times configuration ''c'' is used.
 
* ''Example'':<ref name=":22">{{Cite web|last=Claire Mathieu|title=Approximation Algorithms Part I, Week 3: bin packing|url=https://www.coursera.org/learn/approximation-algorithms-part-1/home/week/3|url-status=live|website=Coursera}}</ref> suppose the item sizes are 3,3,3,3,3,4,4,4,4,4, and ''B''=12. Then there are ''C''=10 possible configurations: 3333; 333; 33, 334; 3, 34, 344; 4, 44, 444. The matrix '''''A''''' has two rows: [4,3,2,2,1,1,1,0,0,0] for s=3 and [0,0,0,1,0,1,2,1,2,3] for s=4. The vector '''n''' is [5,5] since there are 5 items of each size. A possible optimal solution is '''x'''=[1,0,0,0,0,0,1,0,0,1], corresponding to using three bins with configurations 3333, 344, 444.
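The enumeration in this example can be reproduced with a short script. The following sketch (our own illustration, not part of the KK algorithms) enumerates all feasible configurations for item sizes 3 and 4 with ''B''=12 and builds the matrix '''''A''''' column by column:

```python
# Illustration (not from the KK paper): enumerate the feasible configurations
# for the example instance -- item sizes 3 and 4, bin capacity B = 12 -- and
# build the configuration matrix A, one column per configuration.
def configurations(sizes, B):
    """Return all nonempty count-vectors (c_1,...,c_m) with sum c_i*s_i <= B."""
    def rec(i, remaining):
        if i == len(sizes):
            yield ()
            return
        for c in range(remaining // sizes[i] + 1):
            for rest in rec(i + 1, remaining - c * sizes[i]):
                yield (c,) + rest
    return [conf for conf in rec(0, B) if any(conf)]

sizes = [3, 4]
configs = configurations(sizes, B=12)   # 10 configurations, as in the example
A = [[conf[i] for conf in configs] for i in range(len(sizes))]
```

Each column of <code>A</code> is one configuration; for instance, the configuration 344 appears as the count-vector (1, 2).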
 
There are two main difficulties in solving this problem. First, it is an [[Integer programming|integer linear program]], which is computationally hard to solve. Second, the number of variables is ''C'' – the number of configurations – which may be enormous. The KK algorithms cope with these difficulties using several techniques, some of which were already introduced by de la Vega and Lueker.<ref name=":0" /> Here is a high-level description of the algorithm (where <math>I</math> is the original instance):
 
Now, we add the small items into the existing bins in an arbitrary order, as long as there is room. When there is no more room in the existing bins, we open a new bin (as in [[next-fit bin packing]]). Let <math>b_I</math> be the number of bins in the final packing. Then: <blockquote><math>b_I \leq \max(b_J, (1+2 g)\cdot OPT(I) + 1)</math>.</blockquote>''Proof''. If no new bins are opened, then the number of bins remains <math>b_J</math>. If a new bin is opened, then all bins except maybe the last one contain a total size of at least <math>B - g\cdot B</math>, so the total instance size is at least <math>(1-g)\cdot B\cdot (b_I-1)</math>. Therefore, <math>FOPT \geq (1-g)\cdot (b_I-1)</math>, so the optimal solution needs at least <math>(1-g)\cdot (b_I-1)</math> bins. So <math>b_I \leq OPT/(1-g) + 1 = (1+g+g^2+\ldots) OPT+1 \leq (1+2g) OPT+1</math> (assuming <math>g\leq 1/2</math>).
 
In particular, by taking ''g''=1/''n'', we get:<blockquote><math>b_I \leq \max(b_J, OPT +2 \cdot OPT(I)/n + 1)\leq \max(b_J, OPT +3)</math>,</blockquote>since <math>OPT(I)\leq n</math>. Therefore, it is common to assume that all items are larger than 1/''n''.<ref name=":2" />
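The insertion of small items described above can be sketched as follows (a minimal illustration with invented helper names; the text leaves the insertion order arbitrary):

```python
# Sketch of the final step: add the small items to an existing packing,
# opening a new bin only when no open bin has room.
def add_small_items(bins, small_items, B):
    """bins: list of lists of item sizes; returns the extended packing."""
    for item in small_items:
        for b in bins:
            if sum(b) + item <= B:   # any bin with room may receive the item
                b.append(item)
                break
        else:                        # no existing bin has room: open a new bin
            bins.append([item])
    return bins

packing = add_small_items([[7, 4], [9]], [2, 2, 2], B=12)
```

In the example call, the first small item fits into the second bin, and the remaining two force one new bin, so the final packing uses three bins.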
 
== Step 2. Grouping and un-grouping items ==
The [[dual linear program]] of the fractional LP is:<blockquote><math>\text{maximize}~~\mathbf{n}\cdot \mathbf{y}~~~\text{s.t.}~~ A^T \mathbf{y} \leq \mathbf{1}~~~\text{and}~~ \mathbf{y}\geq 0</math>.</blockquote>It has ''m'' variables <math>y_1,\ldots,y_m</math>, and ''C'' constraints – a constraint for each configuration. It has the following economic interpretation. For each size <math>s_i</math>, we should determine a nonnegative price <math>y_i</math>. Our profit is the total price of all items. We want to maximize the profit '''n''' '''y''' subject to the constraints that the total price of items in each configuration is at most 1. This LP now has only ''m'' variables, but a huge number of constraints. Even listing all the constraints is infeasible.
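For the earlier example (five items of size 3, five of size 4, ''B''=12), dual feasibility of a candidate price vector can be verified by brute-force enumeration of the configurations. This sketch (our own illustration) checks the prices ''y''=(1/4, 1/3):

```python
# Brute-force dual-feasibility check for the example instance: every feasible
# configuration must have total price at most 1; the profit n.y then
# lower-bounds the fractional LP optimum.
from fractions import Fraction
from itertools import product

sizes, counts, B = [3, 4], [5, 5], 12
y = [Fraction(1, 4), Fraction(1, 3)]   # candidate prices for sizes 3 and 4

feasible = all(
    sum(c * p for c, p in zip(conf, y)) <= 1
    for conf in product(range(B // 3 + 1), range(B // 4 + 1))
    if sum(c * s for c, s in zip(conf, sizes)) <= B
)
profit = sum(n * p for n, p in zip(counts, y))   # n.y = 35/12
```

Since every configuration constraint holds, the profit 35/12 ≈ 2.92 is a valid lower bound on the fractional optimum of this instance.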
 
Fortunately, it is possible to solve the problem up to any given precision without listing all the constraints, by using a variant of the [[ellipsoid method]]. This variant gets as input a ''[[separation oracle]]'': a function that, given a vector '''y''' ≥ 0, returns one of the following two options:
 
* Assert that '''y''' is feasible, that is, <math>A^T \mathbf{y} \leq \mathbf{1}</math>; or
* There exists a feasible configuration for which the sum of <math>y_i</math> is larger than 1; this means that '''y''' is infeasible. In this case, we also have to return the configuration.
 
This problem can be solved by solving a ''[[knapsack problem]]'', where the item values are <math>y_1,\ldots,y_m</math>, the item weights are <math>s_1,\ldots,s_m</math>, and the weight capacity is ''B'' (the bin size).
 
* If the total value of the optimal knapsack solution is at most 1, then we say that '''y''' is feasible.
* If the total value of the optimal knapsack solution is larger than 1, then we say that '''y''' is infeasible, and the items in the optimal knapsack solution correspond to a configuration that violates a constraint (since <math>\mathbf{a}\cdot \mathbf{y} > 1</math> for the vector '''a''' that corresponds to this configuration).
The knapsack problem can be solved by [[dynamic programming]] in [[pseudo-polynomial time]]: <math>O(m\cdot V)</math>, where ''m'' is the number of inputs and ''V'' is the number of different possible values. To get a polynomial-time algorithm, we can solve the knapsack problem approximately, using input rounding. Suppose we want a solution with tolerance <math>\delta</math>. We can round each of <math>y_1,\ldots,y_m</math> down to the nearest multiple of <math>\delta</math>/''n''. Then, the number of possible values between 0 and 1 is ''n''/<math>\delta</math>, and the run-time is <math>O(m n /\delta)</math>. Since a configuration contains at most ''n'' items, and each value is reduced by at most <math>\delta</math>/''n'', the value of the returned solution is at least the optimal value minus <math>\delta</math>.
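For integer item sizes, the oracle can be sketched as an unbounded-knapsack dynamic program over the capacity (our own simplification; the paper's version instead rounds the values <math>y_i</math> and runs the DP over the value space to stay polynomial):

```python
# Sketch of the separation oracle as an unbounded knapsack over integer sizes:
# find the configuration (multiset of sizes) of maximum total price.
def separation_oracle(y, sizes, B):
    """Return (True, None) if y is feasible, else (False, violating_config)."""
    # best[w] = (max total price, configuration as counts) with total size <= w
    best = [(0.0, [0] * len(sizes))] * (B + 1)
    for w in range(1, B + 1):
        best[w] = best[w - 1]
        for i, s in enumerate(sizes):
            if s <= w:
                val, conf = best[w - s]
                if val + y[i] > best[w][0]:
                    new_conf = conf.copy()
                    new_conf[i] += 1
                    best[w] = (val + y[i], new_conf)
    value, conf = best[B]
    return (True, None) if value <= 1 else (False, conf)

# y = (0.3, 0.35) is infeasible: the configuration 3+3+3+3 has price 1.2 > 1
ok, violating = separation_oracle([0.3, 0.35], [3, 4], B=12)
```

The returned violating configuration supplies exactly the cut <math>\mathbf{a}\cdot \mathbf{y} \leq 1</math> needed by the ellipsoid method below.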
 
=== Ellipsoid method with an approximate separation oracle ===
 
* If the approximate oracle returns a solution with value larger than 1, then <math>\mathbf{y}_t</math> is definitely infeasible, and the solution corresponds to a configuration that violates a constraint '''a'''. We do a "feasibility cut" at '''<math>\mathbf{y}_t</math>''', cutting from the ellipsoid all points '''y''' for which <math>\mathbf{a}\cdot \mathbf{y} > 1</math>.
* If the approximate oracle returns a solution with value at most 1, then '''<math>\mathbf{y}_t</math>''' may or may not be feasible, but '''<math>\mathbf{y}_t</math>''' rounded down (denote it by '''<math>\mathbf{z}_t</math>''') is feasible. By definition of the rounding, we know that <math>\mathbf{n}\cdot \mathbf{z}_t \geq \mathbf{n}\cdot \mathbf{y}_t - \mathbf{n}\cdot \mathbf{1}\cdot (\delta/n) = \mathbf{n}\cdot \mathbf{y}_t - \delta</math>. We still do an "optimality cut" at '''<math>\mathbf{y}_t</math>''': we cut from the ellipsoid all points '''y''' for which <math>\mathbf{n}\cdot \mathbf{y} < \mathbf{n}\cdot \mathbf{y}_t</math>. Note that <math>\mathbf{y}_t</math> might be infeasible, so its value might be larger than OPT; therefore, we might remove some points whose objective is optimal. However, the removed points satisfy <math>\mathbf{n}\cdot \mathbf{y} < \mathbf{n}\cdot \mathbf{z}_t+\delta \leq \text{OPT}+\delta</math>; no point is removed if its value exceeds the value at '''<math>\mathbf{z}_t</math>''' by more than <math>\delta</math>.
Using the approximate separation oracle gives a feasible solution '''y*''' to the dual LP, with <math>\mathbf{n}\cdot \mathbf{y^*} \geq LOPT-\delta</math>, after at most ''<math>Q</math>'' iterations, where <math>Q = 4m^2 \ln(mn/g\delta)</math>. The total run-time of the ellipsoid method with the approximate separation oracle is <math>O(Q m n /\delta)</math>.
 
=== Eliminating constraints ===
During the ellipsoid method, we use at most ''Q'' constraints of the form <math>\mathbf{a}\cdot \mathbf{y} \leq 1</math>. All the other constraints can be eliminated, since they have no effect on the outcome '''y*''' of the ellipsoid method. We can eliminate even more constraints. It is known that, in any LP with ''m'' variables, there is a set of ''m'' constraints that is sufficient for determining the optimal solution (that is, the optimal value is the same even if only these ''m'' constraints are used). We can repeatedly run the ellipsoid method as above, each time trying to remove a specific set of constraints. If the resulting error is at most <math>\delta</math>, then we remove these constraints permanently. It can be shown that we need at most <math>\approx (Q/m) + m\ln(Q/m)</math> eliminations, so the accumulated error is at most <math>\approx \delta\cdot [(Q/m) + m\ln(Q/m)]</math>. If we try sets of constraints deterministically, then in the worst case, one out of ''m'' trials succeeds, so we need to run the ellipsoid method at most <math>\approx m\cdot[(Q/m) + m\ln(Q/m)] = Q+m^2\ln(Q/m)</math> times. If we choose the constraints to remove at random, then the expected number of iterations is <math> O(m)\cdot[1 + \ln(Q/m)]</math>.
 
Finally, we have a reduced dual LP, with only ''m'' variables and ''m'' constraints. The optimal value of the reduced LP is at least <math> LOPT-h</math>, where <math>h \approx \delta\cdot [(Q/m) + m\ln(Q/m)]</math>.
 
=== Solving the primal LP ===
By the [[Dual linear program|LP duality theorem]], the minimum value of the primal LP equals the maximum value of the dual LP, which we denoted by LOPT. Once we have a reduced dual LP, we take its dual, and get a reduced primal LP. This LP has only ''m'' variables, corresponding to only ''m'' out of ''C'' configurations. The maximum value of the reduced dual LP is at least <math> LOPT-h</math>. It can be shown{{Clarify|date=January 2022}} that the optimal solution of the reduced primal LP is at most <math> LOPT+h</math>. The solution gives a near-optimal bin packing, using at most ''m'' configurations.
 
The total run-time of the deterministic algorithm, when all items are larger than <math>g\cdot B</math>, is:

<math>O\left(\frac{Q m n}{\delta}\cdot\left( Q+m^2\ln\frac{Q}{m}\right)\right)=O\left(\frac{Q^2 m n + Q m^3 n\ln\frac{Q}{m}}{\delta}\right)\approx O\left(m^8 \ln{m} \ln^2\left(\frac{m n}{g h}\right) + \frac{m^4 n \ln{m}}{h}\ln\left(\frac{m n}{g h}\right) \right).</math>

The expected total run-time of the [[randomized algorithm]] is: <math>O\left(m^7 \log{m} \log^2\left(\frac{m n}{g h}\right) + \frac{m^4 n \log{m}}{h}\log\left(\frac{m n}{g h}\right) \right)</math>.
== End-to-end algorithms ==
Karmarkar and Karp presented three algorithms that use the above techniques with different parameters. The run-time of all these algorithms depends on a function <math>T(\cdot,\cdot)</math>, which is a polynomial function describing the time it takes to solve the fractional LP with tolerance ''h''=1, which is, for the deterministic version, <math>T(m,n)\in O(m^8\log{m}\log^2{n} + m^4 n \log{m}\log{n} )</math>.
 
=== Algorithm 1 ===
Let <math>\epsilon>0</math> be a constant representing the desired approximation accuracy.
*1-a. Set <math>g = \max(1/n, \epsilon/2)</math>. Let <math>J</math> be an instance constructed from <math>I</math> by removing all items smaller than ''g''.
**2-a. Set <math>k = n\cdot \epsilon^2</math>. Let <math>K</math> be an instance constructed from <math>J</math> by linear grouping with parameter ''k'', and let ''<math>K'</math>'' be the remaining instance (the group of ''k'' largest items). Note that <math>m(K) \leq n/k + 1 \approx 1/\epsilon^2</math>.
***3-a. Construct the configuration linear program for <math>K</math>, without the integrality constraints.
****4. Compute a solution '''x''' for <math>K</math>, with tolerance ''h''=1. The result is a fractional bin packing with <math>b_L\leq LOPT(K)+1</math> bins. The run-time is <math>T(m(K),n(K)) \leq T(\epsilon^{-2},n) </math>.
***3-b. Round '''x''' to an integral solution for <math>K</math>. Add at most <math>m(K)/2</math> bins for the fractional part. The total number of bins is <math>b_K\leq b_L + m(K)/2 \leq LOPT(K)+1 + \frac{1}{2\epsilon^2}</math>.
**2-b. Pack the items in ''<math>K'</math>'' using at most ''k'' bins; get a packing of <math>J</math>. The number of bins is <math>b_J\leq b_K+k \leq LOPT(K)+1 + \frac{1}{2\epsilon^2} + n\cdot \epsilon^2</math>.
*1-b. Add the items smaller than ''g'' to get a solution for <math>I</math>. The number of bins is: <math>b_I \leq \max(b_J, (1+2 g)\cdot OPT(I) + 1)\leq (1+\epsilon)OPT(I) + \frac{1}{2\epsilon^2} + 3</math>.
 
*All in all, the number of bins is in <math>(1+\epsilon)\mathrm{OPT} + \mathcal{O}(\epsilon^{-2})</math> and the run-time is in <math>O(n\log n + T(\epsilon^{-2},n))</math>. By choosing <math>\epsilon = OPT^{-1/3}</math> we get <math>OPT + O(OPT^{2/3})</math>.
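The linear grouping used in step 2-a can be sketched as follows (our own illustration of the standard grouping of de la Vega and Lueker: sort the items in decreasing order, set aside the ''k'' largest, and round every other item up to the largest size in its group of ''k''):

```python
# Sketch of linear grouping with parameter k. The returned instance K has at
# most n/k distinct sizes; K_prime holds the k largest items, packed separately.
def linear_grouping(items, k):
    items = sorted(items, reverse=True)
    K_prime = items[:k]                  # group of the k largest items
    K = []
    for g in range(k, len(items), k):
        group = items[g:g + k]
        K += [group[0]] * len(group)     # round up to the group's maximum
    return K, K_prime

K, K_prime = linear_grouping([9, 8, 7, 6, 5, 4, 3, 2, 1], k=3)
```

In the example call, the items 6, 5, 4 are all rounded up to 6, and 3, 2, 1 up to 3, so the rounded instance has only two distinct sizes.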
 
=== Algorithm 2 ===
Let <math>g>0</math> be a real parameter and <math>k>0</math> an integer parameter to be determined later.
 
*1-a. Let <math>J</math> be an instance constructed from <math>I</math> by removing all items smaller than ''g''.
*2. While <math>FOPT(J) > 1+\frac{k}{k-1}\ln(1/g)</math> do:
**2-a. Do the Alternative Geometric Grouping with parameter ''k''. Let <math>K</math> be the resulting instance, and let ''<math>K'</math>'' be the remaining instance. We have <math>m(K) \leq FOPT(J)/k + \ln(1/g)</math>.
***3-a. Construct the configuration linear program for <math>K</math>, without the integrality constraints.
****4. Compute a solution '''x''' for <math>K</math>, with tolerance ''h''=1. The result is a fractional bin packing with <math>b_L\leq LOPT(K)+1</math> bins. The run-time is <math>T(m(K),n(K)) \leq T(FOPT(J)/k + \ln(1/g) ,n) </math>.
***3-b. Round '''x''' to an integral solution for <math>K</math>. Do ''not'' add bins for the fractional part. Instead, just remove the packed items from <math>J</math>.
**2-b. Pack the items in ''<math>K'</math>'' in at most ''<math>2 k\cdot (2 + \ln(1/g))</math>'' bins.
*2. Once <math>FOPT(J) \leq 1+\frac{k}{k-1}\ln(1/g)</math>, pack the remaining items greedily into at most <math>2 FOPT(J) \leq 2+\frac{2 k}{k-1}\ln(1/g)</math> bins.
**At each iteration of the loop in step 2, the fractional part of '''x''' has at most ''m''(''K'') patterns, so <math>FOPT(J_{t+1}) \leq m(K_{t}) \leq FOPT(J_{t})/k + \ln(1/g)</math>. The FOPT drops by a factor of ''k'' in each iteration, so the number of iterations is at most <math>\frac{\ln FOPT(I)}{\ln k}+1</math>.
**Therefore, the total number of bins used for <math>J</math> is: <math>b_J \leq OPT(I) + \left[1+\frac{\ln FOPT(I)}{\ln k}\right]\left[1 + 4k + 2k\ln(1/g)\right]+ 2 + \frac{2k}{k-1}\ln(1/g)</math>.
*1-b. Add the items smaller than ''g'' to get a solution for <math>I</math>. The number of bins is: <math>b_I \leq \max(b_J, (1+2 g)\cdot OPT(I) + 1)</math>.
 
The run-time is in <math>O(n\log n + T(FOPT(J)/k + \ln(1/g),n))</math>.
 
Now, if we choose ''k''=2 and ''g''=1/FOPT(I), we get:<blockquote><math>b_J \leq OPT + O(\log^2(FOPT)) </math>, </blockquote>and hence:<blockquote><math>b_I \leq \max(b_J, OPT+2OPT /FOPT+1) \leq \max(b_J, OPT+5) \in OPT+O(\log^2(OPT))</math>, </blockquote>so the total number of bins is in <math>OPT + O(\log^2(FOPT))</math>. The run-time is <math>O(n\log n) + T(FOPT/2 + \ln(FOPT) ,n)\in O(n\log{n} + T(FOPT, n)) </math>.
 
*The same algorithm can be used with different parameters to trade off run-time with accuracy. For some parameter <math>\alpha\in(0,1)</math>, choose <math>k=FOPT^{\alpha}</math> and <math>g=1/FOPT^{1-\alpha}</math>. Then, the packing needs at most <math>\mathrm{OPT} + \mathcal{O}(OPT^\alpha)</math> bins, and the run-time is in <math>O(n\log{n} + T(FOPT^{(1-\alpha)},n))</math>.
 
=== Algorithm 3 ===
The third algorithm is useful when the number of sizes ''m'' is small (see also [[high-multiplicity bin packing]]).
*1-a. Set <math>g = \frac{\log^2(m)}{FOPT(I)}</math>. Let <math>K</math> be an instance constructed from <math>I</math> by removing all items smaller than ''g''.
*If <math>m(K)\leq FOPT(K)</math> then:
**3-a. Construct the configuration linear program for <math>K</math>, without the integrality constraints.
***4. Compute a solution '''x''' for <math>K</math>, with tolerance ''h''=1. The result is a fractional bin packing with <math>b_L\leq LOPT(K)+1</math> bins. The run-time is <math>T(m(K),n(K)) \leq T(m,n) </math>.
**3-b. Round '''x''' to an integral solution for <math>K</math>. Do ''not'' add bins for the fractional part. Instead, just remove the packed items from <math>K</math>.
* Run step 2 of Algorithm 2 on the remaining pieces.
*1-b. Add the items smaller than ''g'' to get a solution for <math>I</math>. The number of bins is: <math>b_I \leq \max(b_J, (1+2 g)\cdot OPT(I) + 1)</math>.
 
*It uses at most <math>\mathrm{OPT} + \mathcal{O}(\log^2 m)</math> bins, and the run-time is in <math>O(n\log{n} + T(m,n))</math>.
 
== Improvements ==
The KK techniques were later improved to provide even better approximations.
 
Rothvoss<ref name=":2">{{Cite book|last=Rothvoß|first=T.|title=2013 IEEE 54th Annual Symposium on Foundations of Computer Science |chapter=Approximating Bin Packing within O(log OPT · Log Log OPT) Bins |date=2013-10-01|pages=20–29|arxiv=1301.4010|doi=10.1109/FOCS.2013.11|isbn=978-0-7695-5135-7|s2cid=15905063}}</ref> uses the same scheme as Algorithm 2, but with a different rounding procedure in Step 2. He introduced a "gluing" step, in which small items are glued together to yield a single larger item. This gluing can be used to increase the smallest item size to about <math>B/\log^{12}(n)</math>. When all sizes are at least <math>B/\log^{12}(n)</math>, we can substitute <math>g = 1/\log^{12}(n)</math> in the guarantee of Algorithm 2, and get:<blockquote><math>b_J \leq OPT(I) + O(\log(FOPT)\log(\log(n)))</math>, </blockquote>which yields a packing with at most <math>\mathrm{OPT} + O(\log(\mathrm{OPT})\cdot \log\log(\mathrm{OPT}))</math> bins.
 
Hoberg and Rothvoss<ref name=":3">{{Citation|last1=Hoberg|first1=Rebecca|title=A Logarithmic Additive Integrality Gap for Bin Packing|date=2017-01-01|work=Proceedings of the 2017 Annual ACM-SIAM Symposium on Discrete Algorithms|pages=2616–2625|series=Proceedings|publisher=Society for Industrial and Applied Mathematics|doi=10.1137/1.9781611974782.172|isbn=978-1-61197-478-2|last2=Rothvoss|first2=Thomas|s2cid=1647463|doi-access=free|arxiv=1503.08796}}</ref> use a similar scheme in which the items are first packed into "containers", and then the containers are packed into bins. Their algorithm needs at most <math>OPT(I) + O(\log(OPT))</math> bins.
 
== References ==
{{Reflist}}
 
[[Category:Bin packing]]