{{short description|Set of related approximation algorithms for the bin packing problem}}
{{Multiple issues|{{refimprove|date=March 2022}}{{more footnotes|date=March 2022}}}}
The '''Karmarkar–Karp (KK) bin packing algorithms''' are several related [[approximation algorithm]]s for the [[bin packing problem]].<ref name=":12">{{cite journal|last1=Karmarkar|first1=Narendra|last2=Karp|first2=Richard M.|date=November 1982|title=An efficient approximation scheme for the one-dimensional bin-packing problem|url=https://ieeexplore.ieee.org/document/4568405/references#references|journal=23rd Annual Symposium on Foundations of Computer Science (SFCS 1982)|pages=312–320|doi=10.1109/SFCS.1982.61|s2cid=18583908}}</ref> The bin packing problem is the problem of packing items of different sizes into bins of identical capacity, such that the total number of bins is as small as possible. Finding the optimal solution is [[NP-hardness|computationally hard]]. [[Narendra Karmarkar|Karmarkar]] and [[Richard M. Karp|Karp]] devised an algorithm that runs in [[Polynomial-time|polynomial time]] and finds a solution with at most <math>\mathrm{OPT} + \mathcal{O}(\log^2(\mathrm{OPT}))</math> bins, where OPT is the number of bins in the optimal solution. They also devised several other algorithms with slightly different approximation guarantees and run-time bounds.
The KK algorithms were considered a breakthrough in the study of bin packing: the previously known algorithms guaranteed only a multiplicative approximation, where the number of bins was at most <math>r\cdot \mathrm{OPT}+s</math> for some constants <math>r>1, s>0</math>, or at most <math>(1+\varepsilon)\mathrm{OPT} + 1</math>.<ref name=":0">{{cite journal|last1=Fernandez de la Vega|first1=W.|last2=Lueker|first2=G. S.|date=1981|title=Bin packing can be solved within 1 + ε in linear time|journal=Combinatorica|language=en|volume=1|issue=4|pages=349–355|doi=10.1007/BF02579456|issn=1439-6912|s2cid=10519631}}</ref> The KK algorithms were the first to attain an additive approximation.
The KK algorithms essentially solve the [[configuration linear program]]:<blockquote><math>\text{minimize}~~\mathbf{1}\cdot \mathbf{x}~~~\text{s.t.}~~ A \mathbf{x}\geq \mathbf{n}~~~\text{and}~~ \mathbf{x}\geq 0~~~\text{and}~~ \mathbf{x}~\text{is an integer}~</math>.</blockquote>Here, '''''A''''' is a matrix with ''m'' rows. Each column of '''''A''''' represents a feasible ''configuration'' – a multiset of item sizes whose sum is at most ''B''. The set of all configurations is ''C''. '''x''' is a vector of size |''C''|; each element ''x<sub>c</sub>'' of '''x''' represents the number of times configuration ''c'' is used.
* ''Example'':<ref name=":22">{{Cite web|first=Claire|last=Mathieu|title=Approximation Algorithms Part I, Week 3: bin packing|url=https://www.coursera.org/learn/approximation-algorithms-part-1/home/week/3|website=Coursera}}</ref> see the illustration below.
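For intuition, the columns of '''''A''''' can be listed explicitly when the instance is small. The following sketch (in Python; the instance with distinct sizes 3 and 4 and capacity ''B'' = 12 is hypothetical, chosen only for illustration) enumerates all feasible configurations:

<syntaxhighlight lang="python">
def configurations(sizes, B):
    """Enumerate all feasible configurations: multisets of the distinct
    item sizes whose total size is at most B. Each configuration is
    returned as a tuple of counts, one count per distinct size."""
    m = len(sizes)
    result = []

    def extend(i, remaining, counts):
        if i == m:
            result.append(tuple(counts))
            return
        # try every feasible multiplicity of the i-th size
        for c in range(remaining // sizes[i] + 1):
            counts[i] = c
            extend(i + 1, remaining - c * sizes[i], counts)
        counts[i] = 0

    extend(0, B, [0] * m)
    return result

# Hypothetical instance: distinct sizes 3 and 4, bin capacity B = 12.
# Among the 11 configurations (including the empty one) are (4, 0) -
# four items of size 3 - and (2, 1) - two 3's and one 4.
print(configurations([3, 4], 12))
</syntaxhighlight>

Even this tiny instance has 11 configurations; in general, the number of configurations grows exponentially with ''m'', which is the second difficulty discussed below.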
There are two main difficulties in solving this problem. First, it is an [[Integer programming|integer linear program]], which is computationally hard to solve. Second, the number of variables is |''C''| – the number of configurations, which may be enormous. The KK algorithms cope with these difficulties using several techniques, some of which were already introduced by de la Vega and Lueker.<ref name=":0" /> At a high level, the original instance <math>I</math> is first separated into small and large items; the large sizes are grouped and rounded so that the number of distinct sizes becomes small; the fractional configuration LP of the rounded instance is solved approximately; and the fractional solution is rounded to an integral packing, after which the small items are added greedily.
The [[dual linear program]] of the fractional LP is:<blockquote><math>\text{maximize}~~\mathbf{n}\cdot \mathbf{y}~~~\text{s.t.}~~ A^T \mathbf{y} \leq \mathbf{1}~~~\text{and}~~ \mathbf{y}\geq 0</math>.</blockquote>It has ''m'' variables <math>y_1,\ldots,y_m</math>, and |''C''| constraints – one for each configuration. It has the following economic interpretation. For each size <math>s_i</math>, we should determine a nonnegative price <math>y_i</math>. Our profit is the total price of all items. We want to maximize the profit <math>\mathbf{n}\cdot \mathbf{y}</math>, subject to the constraint that the total price of the items in each configuration is at most 1. This LP has only ''m'' variables, but a huge number of constraints; even listing all the constraints is infeasible.
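For example, in the hypothetical instance above (sizes 3 and 4, ''B'' = 12), the configuration consisting of two items of size 3 and one item of size 4 yields the constraint <math>2y_1+y_2\leq 1</math>: the total price of the items packed into a single bin may not exceed 1, the "value" of one bin.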
Fortunately, it is possible to solve the problem up to any given precision without listing all the constraints, by using a variant of the [[ellipsoid method]]. This variant gets as input a ''[[separation oracle]]'': a function that, given a vector <math>\mathbf{y}\geq 0</math>, does one of the following:
* Assert that '''y''' is feasible, that is, <math>A^T \mathbf{y} \leq \mathbf{1}</math>; or
* Assert that '''y''' is infeasible, that is, there exists a feasible configuration for which the sum of <math>y_i</math> is larger than 1; in this case, the oracle must also return that configuration.
This problem can be solved by solving a ''[[knapsack problem]]'' in which there are ''m'' item types, the value of type ''i'' is the price <math>y_i</math>, its weight is the size <math>s_i</math>, and the capacity is ''B'', so that the most valuable feasible configuration is sought:
* If the total value of the optimal knapsack solution is at most 1, then we say that '''y''' is feasible.
* If the total value of the optimal knapsack solution is larger than 1, then we say that '''y''' is infeasible, and the items in the optimal knapsack solution correspond to a configuration that violates a constraint (since <math>\mathbf{a}\cdot \mathbf{y} > 1</math> for the vector '''a''' that corresponds to this configuration).
The knapsack problem can be solved by [[dynamic programming]] in [[pseudo-polynomial time]]: <math>O(m\cdot V)</math>, where ''m'' is the number of inputs and ''V'' is the number of different possible values. To get a polynomial-time algorithm, we can solve the knapsack problem approximately, using input rounding. Suppose we want a solution with tolerance <math>\delta</math>. We can round each of <math>y_1,\ldots,y_m</math> down to the nearest multiple of <math>\delta/n</math>. Then, the number of possible values between 0 and 1 is <math>n/\delta</math>, and the run-time is <math>O(m n /\delta)</math>. Since a configuration contains at most ''n'' items and each value is rounded down by less than <math>\delta/n</math>, the value of the solution found is at least the optimal value minus <math>\delta</math>.
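The following Python sketch illustrates this approximate oracle under simplifying assumptions: the item sizes are integers, and the dynamic program runs over the capacities <math>0,\ldots,B</math> rather than over the rounded values (giving run-time <math>O(mB)</math> instead of <math>O(mn/\delta)</math>, but implementing the same rounding idea). The function name and interface are illustrative only.

<syntaxhighlight lang="python">
import math

def approx_separation_oracle(y, sizes, B, delta, n):
    """Approximate separation oracle for the dual LP (illustrative sketch,
    assuming integer sizes). Each price y[i] is rounded down to a multiple
    of delta/n; an unbounded-knapsack dynamic program over capacities 0..B
    then finds the most valuable configuration under the rounded prices.
    Returns (True, None) if y is accepted as feasible, otherwise
    (False, counts) where counts is a violating configuration."""
    step = delta / n
    ry = [math.floor(p / step) * step for p in y]   # rounded prices
    best = [0.0] * (B + 1)    # best[c]: max rounded price within capacity c
    take = [None] * (B + 1)   # take[c]: size index last added to reach best[c]
    for c in range(1, B + 1):
        for i, s in enumerate(sizes):
            if s <= c and best[c - s] + ry[i] > best[c]:
                best[c] = best[c - s] + ry[i]
                take[c] = i
    c = max(range(B + 1), key=best.__getitem__)
    if best[c] <= 1:
        return True, None     # no configuration has rounded price above 1
    counts = [0] * len(sizes)           # reconstruct the violating configuration
    while take[c] is not None:
        counts[take[c]] += 1
        c -= sizes[take[c]]
    return False, counts
</syntaxhighlight>

For instance, with prices <math>y=(0.3, 0.35)</math> on the hypothetical sizes 3 and 4 above, the oracle returns the configuration of four items of size 3, whose total price <math>4\cdot 0.3 = 1.2</math> proves that '''y''' is infeasible.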
=== Ellipsoid method with an approximate separation oracle ===
In each iteration ''t'', the ellipsoid method produces a candidate point <math>\mathbf{y}_t</math> and calls the approximate oracle on it:
* If the approximate oracle returns a solution with value larger than 1, then <math>\mathbf{y}_t</math> is definitely infeasible, and the solution corresponds to a configuration that violates a constraint '''a'''. We make a "feasibility cut" at '''<math>\mathbf{y}_t</math>''', cutting from the ellipsoid all points '''y''' for which <math>\mathbf{a}\cdot \mathbf{y} > 1</math>.
* If the approximate oracle returns a solution with value at most 1, then '''<math>\mathbf{y}_t</math>''' may or may not be feasible, but '''<math>\mathbf{y}_t</math>''' rounded down (denote it by '''<math>\mathbf{z}_t</math>''') is feasible. By definition of the rounding, we know that <math>\mathbf{n}\cdot \mathbf{z}_t \geq \mathbf{n}\cdot \mathbf{y}_t - \mathbf{n}\cdot \mathbf{1}\cdot (\delta/n) = \mathbf{n}\cdot \mathbf{y}_t - \delta</math>. We still make an "optimality cut" at '''<math>\mathbf{y}_t</math>''': we cut from the ellipsoid all points '''y''' for which <math>\mathbf{n}\cdot \mathbf{y} < \mathbf{n}\cdot \mathbf{y}_t</math>. Note that this cut may remove some feasible points, but the feasible point <math>\mathbf{z}_t</math>, whose value is within <math>\delta</math> of the value of every removed point, is recorded, so at most <math>\delta</math> of the optimal value is lost.
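Both kinds of cuts can be realized by the standard central-cut update of the ellipsoid method. The following sketch (textbook formulas, written with NumPy; not specific to Karmarkar and Karp's presentation) replaces the current ellipsoid by the smallest ellipsoid containing the kept half:

<syntaxhighlight lang="python">
import numpy as np

def central_cut(center, E, a):
    """One central-cut step of the ellipsoid method (standard formulas,
    for dimension m >= 2). The current ellipsoid is
    {y : (y - center)^T E^{-1} (y - center) <= 1}; the step keeps its
    intersection with the half-space {y : a.y <= a.center} and returns
    the smaller enclosing ellipsoid.
    For a feasibility cut, pass a = the violating configuration;
    for an optimality cut, pass a = -n (removing points with n.y < n.y_t)."""
    m = len(center)
    Ea = E @ a
    g = Ea / np.sqrt(a @ Ea)     # cut direction, normalized w.r.t. E
    new_center = center - g / (m + 1)
    new_E = (m * m / (m * m - 1.0)) * (E - (2.0 / (m + 1)) * np.outer(g, g))
    return new_center, new_E
</syntaxhighlight>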
Using the approximate separation oracle gives a feasible solution '''y*''' to the dual LP, with <math>\mathbf{n}\cdot \mathbf{y^*} \geq LOPT-\delta</math>, after at most ''<math>Q</math>'' iterations, where <math>Q = 4m^2 \ln(mn/g\delta)</math>. The total run-time of the ellipsoid method with the approximate separation oracle is <math>O(Q m n /\delta)</math>.
Overall, the run-time of the deterministic algorithm is: <math>
O\left(m^8 \ln{m} \ln^2\left(\frac{m n}{g h}\right) + \frac{m^4 n \ln{m}}{h}\ln\left(\frac{m n}{g h}\right) \right)</math>.
The expected total run-time of the [[randomized algorithm]] is: <math>O\left(m^7 \log{m} \log^2(\frac{m n}{g h}) + \frac{m^4 n \log{m}}{h}\log(\frac{m n}{g h}) \right)</math>.
== End-to-end algorithms ==
Karmarkar and Karp presented three algorithms that use the above techniques with different parameters. The run-time of all these algorithms depends on a function <math>T(\cdot,\cdot)</math>, a polynomial function describing the time it takes to solve the fractional LP with tolerance ''h''=1; for the deterministic version, <math>T(m,n)\in O(m^8\log{m}\log^2{n} + m^4 n \log{m}\log{n} )</math>.
Now, if we choose ''k''=2 and ''g''=1/FOPT(I), where FOPT(''I'') is the optimal value of the fractional LP, we get:<blockquote><math>b_J \leq OPT + O(\log^2(FOPT)) </math>, </blockquote>and hence (using the fact that <math>OPT < 2\cdot FOPT+1</math>):<blockquote><math>b_I \leq \max(b_J, OPT+2OPT /FOPT+1) \leq \max(b_J, OPT+5) \in OPT+O(\log^2(OPT))</math>, </blockquote>so the total number of bins is in <math>OPT + O(\log^2(FOPT))</math>. The run-time is <math>O(n\log n) + T(FOPT/2 + \ln(FOPT) ,n)\in O(n\log{n} + T(FOPT, n)) </math>.
The same algorithm can be used with different parameters to trade off run-time with accuracy. For some parameter <math>\alpha\in(0,1)</math>, choose <math>k=FOPT^{\alpha}</math> and <math>g=1/FOPT^{1-\alpha}</math>. Then, the packing needs at most <math>OPT + O(OPT^{\alpha})</math> bins, and the run-time is <math>O(n\log n) + T(FOPT^{1-\alpha}, n)</math>; a larger <math>\alpha</math> thus yields a faster run-time but a larger additive error.
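For example (with hypothetical numbers), taking <math>FOPT=10^4</math> and <math>\alpha=1/2</math> gives <math>k=100</math> and <math>g=1/100</math>: the rounded instance has only on the order of <math>FOPT^{1/2}=100</math> distinct sizes, so the fractional LP becomes much cheaper to solve, while the grouping and the small items add only on the order of <math>FOPT^{1/2}</math> extra bins.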
=== Algorithm 3 ===
The KK techniques were later improved to provide even better approximations.
Rothvoss<ref name=":2">{{Cite book|last=Rothvoß|first=Thomas|chapter=Approximating Bin Packing within O(log OPT · Log Log OPT) Bins|title=2013 IEEE 54th Annual Symposium on Foundations of Computer Science|date=2013|arxiv=1301.4010}}</ref> presented an algorithm that needs at most <math>\mathrm{OPT} + O(\log(\mathrm{OPT})\cdot \log\log(\mathrm{OPT}))</math> bins.
== References ==