Parameterized complexity

In [[computer science]], '''parameterized complexity''' is a branch of [[computational complexity theory]] that focuses on classifying [[computational problems]] according to their inherent difficulty with respect to ''multiple'' parameters of the input or output. The complexity of a problem is then measured as a [[Function (mathematics)|function]] of those parameters. This allows the classification of [[NP-hard]] problems on a finer scale than in the classical setting, where the complexity of a problem is only measured as a function of the number of bits in the input. The first systematic work on parameterized complexity was done by {{harvtxt|Downey|Fellows|1999}}.
 
Under the assumption that [[P versus NP problem|P ≠ NP]], there exist many natural problems that require superpolynomial [[running time]] when complexity is measured in terms of the input size only, but that are computable in a time that is polynomial in the input size and exponential or worse in a parameter {{mvar|k}}. Hence, if {{mvar|k}} is fixed at a small value and the growth of the function over {{mvar|k}} is relatively small, then such problems can still be considered "tractable" despite their traditional classification as "intractable".
 
The existence of efficient, exact, and deterministic solving algorithms for [[NP-complete]], or otherwise [[NP-hard]], problems is considered unlikely if input parameters are not fixed; all known solving algorithms for these problems require time that is [[Exponential time|exponential]] (or at least superpolynomial) in the total size of the input. However, some problems can be solved by algorithms that are exponential only in the size of a fixed parameter while polynomial in the size of the input. Such an algorithm is called a [[fixed-parameter tractable]] (fpt-)algorithm, because the problem can be solved efficiently for small values of the fixed parameter.
 
Problems in which some parameter {{mvar|k}} is fixed are called parameterized problems. A parameterized problem that allows for such an fpt-algorithm is said to be a '''fixed-parameter tractable''' problem and belongs to the class {{sans-serif|FPT}}, and the early name of the theory of parameterized complexity was '''fixed-parameter tractability'''.
 
Many problems have the following form: given an object {{mvar|x}} and a nonnegative integer {{mvar|k}}, does {{mvar|x}} have some property that depends on {{mvar|k}}? For instance, for the [[vertex cover problem]], the parameter can be the number of vertices in the cover. In many applications, for example when modelling error correction, one can assume the parameter to be "small" compared to the total input size. Then it is challenging to find an algorithm that is exponential ''only'' in {{mvar|k}}, and not in the input size.
 
In this way, parameterized complexity can be seen as ''two-dimensional'' complexity theory. This concept is formalized as follows:
:A parameterized problem {{mvar|L}} is ''fixed-parameter tractable'' if the question "<math>(x, k) \in L</math>?" can be decided in running time <math>f(k) \cdot |x|^{O(1)}</math>, where {{mvar|f}} is an arbitrary function depending only on {{mvar|k}}. The corresponding complexity class is called '''FPT'''.
 
For example, there is an algorithm that solves the vertex cover problem in <math>O(kn + 1.274^k)</math> time,<ref>{{harvnb|Chen|Kanj|Xia|2006}}</ref> where {{mvar|n}} is the number of vertices and {{mvar|k}} is the size of the vertex cover. This means that vertex cover is fixed-parameter tractable with the size of the solution as the parameter.
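
The cited bound relies on an involved case analysis; as a simpler illustration of why vertex cover is fixed-parameter tractable, the following sketch (an illustrative implementation, not the cited algorithm, and assuming the graph is given as a list of edges) is the classical bounded search tree algorithm, which decides the problem in <math>O(2^k \cdot m)</math> time on a graph with {{mvar|m}} edges.

<syntaxhighlight lang="python">
# Bounded search tree sketch for vertex cover: pick any uncovered edge (u, v);
# every vertex cover must contain u or v, so branch on the two choices.
# The recursion depth is at most k, giving O(2^k * m) time overall.
def has_vertex_cover(edges, k):
    """Decide whether the graph given as an edge list has a vertex cover of size <= k."""
    uncovered = next(iter(edges), None)
    if uncovered is None:
        return True       # no edges remain, so the empty set is a cover
    if k == 0:
        return False      # edges remain but the budget is exhausted
    u, v = uncovered
    # Branch: put u into the cover, or put v into the cover.
    return (has_vertex_cover([e for e in edges if u not in e], k - 1)
            or has_vertex_cover([e for e in edges if v not in e], k - 1))

# A path on four vertices has a vertex cover of size 2 but not of size 1.
path = [(1, 2), (2, 3), (3, 4)]
assert has_vertex_cover(path, 2) and not has_vertex_cover(path, 1)
</syntaxhighlight>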
 
== Complexity classes ==
 
=== FPT ===
FPT contains the ''fixed-parameter tractable'' problems, which are those that can be solved in time <math>f(k) \cdot {|x|}^{O(1)}</math> for some computable function {{mvar|f}}. Typically, this function is thought of as single exponential, such as <math>2^{O(k)}</math>, but the definition admits functions that grow even faster. This is essential for a large part of the early history of this class. The crucial part of the definition is to exclude functions of the form <math>f(n,k)</math>, such as <math>n^k</math>. The class '''FPL''' (fixed-parameter linear) is the class of problems solvable in time <math>f(k) \cdot |x|</math> for some computable function {{mvar|f}}.<ref>{{harvtxt|Grohe|1999}}</ref> FPL is thus a subclass of FPT.
 
An example is the [[Boolean satisfiability]] problem, parameterised by the number of variables. A given formula of size {{mvar|m}} with {{mvar|k}} variables can be checked by brute force in time <math>O(2^km)</math>. A [[vertex cover]] of size {{mvar|k}} in a graph of order {{mvar|n}} can be found in time <math>O(2^kn)</math>, so the vertex cover problem is also in FPT.
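
As a concrete sketch of the brute-force bound for satisfiability (assuming, for illustration only, that the formula is given in conjunctive normal form as a list of clauses of signed integer literals, in the DIMACS convention), one can enumerate all <math>2^k</math> assignments and evaluate the formula under each:

<syntaxhighlight lang="python">
from itertools import product

# Brute-force satisfiability check in O(2^k * m) time, where k is the number
# of variables and m is the formula size.  A positive literal i is satisfied
# when variable i is set to True, a negative literal -i when it is set to False.
def satisfiable(clauses, k):
    for assignment in product([False, True], repeat=k):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# (x1 or not x2) and (not x1 or x2) is satisfiable; x1 and (not x1) is not.
assert satisfiable([[1, -2], [-1, 2]], k=2)
assert not satisfiable([[1], [-1]], k=1)
</syntaxhighlight>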
 
An example of a problem that is thought not to be in FPT is [[graph coloring]] parameterised by the number of colors. It is known that 3-coloring is [[NP-hard]], and an algorithm for graph {{mvar|k}}-coloring in time <math>f(k)n^{O(1)}</math> for <math>k=3</math> would run in polynomial time in the size of the input. Thus, if graph coloring parameterised by the number of colors were in FPT, then [[P versus NP problem|P&nbsp;=&nbsp;NP]].
 
There are a number of alternative definitions of FPT. For example, the running-time requirement can be replaced by <math>f(k) + |x|^{O(1)}</math>. Also, a parameterised problem is in FPT if it has a so-called kernel. [[Kernelization]] is a preprocessing technique that reduces the original instance to its "hard kernel", a possibly much smaller instance that is equivalent to the original instance but whose size is bounded by a function of the parameter.
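
A minimal sketch of a well-known kernelization for vertex cover (often attributed to Buss; the edge-list representation and the helper name are chosen here purely for illustration): every vertex of degree greater than {{mvar|k}} must belong to any cover of size at most {{mvar|k}}, and once such vertices are removed, a yes-instance can have at most <math>k^2</math> edges, so the reduced instance has size bounded by a function of the parameter alone.

<syntaxhighlight lang="python">
# Buss-style kernelization sketch for vertex cover.  Returns a reduced,
# equivalent instance (edges', k'), or None if (edges, k) is a no-instance.
def vertex_cover_kernel(edges, k):
    changed = True
    while changed and k >= 0:
        changed = False
        degree = {}
        for u, v in edges:
            degree[u] = degree.get(u, 0) + 1
            degree[v] = degree.get(v, 0) + 1
        high = [v for v, d in degree.items() if d > k]
        if high:
            w = high[0]
            edges = [e for e in edges if w not in e]   # w must be in the cover
            k -= 1
            changed = True
    # Every remaining vertex has degree <= k, so a cover of size k can cover
    # at most k^2 edges; more edges than that means a no-instance.
    if k < 0 or len(edges) > k * k:
        return None
    return edges, k

# A star with five leaves and budget 1 kernelizes to the empty instance.
star = [(0, i) for i in range(1, 6)]
assert vertex_cover_kernel(star, 1) == ([], 0)
</syntaxhighlight>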
 
FPT is closed under a parameterised notion of [[Reduction (complexity)|reductions]] called '''''fpt-reductions'''''. Such reductions transform an instance <math>(x,k)</math> of some problem into an equivalent instance <math>(x',k')</math> of another problem (with <math>k' \leq g(k)</math>) and can be computed in time <math>f(k)\cdot p(|x|)</math> where <math>p</math> is a polynomial.
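
As an illustrative sketch (a standard textbook example, not tied to the references in this article), the reduction mapping an instance <math>(G,k)</math> of Independent Set to the instance <math>(\bar G, k)</math> of Clique, where <math>\bar G</math> is the complement graph, is an fpt-reduction: the new parameter is <math>k' = k</math> and the complement can be computed in polynomial time.

<syntaxhighlight lang="python">
from itertools import combinations

# fpt-reduction sketch: Independent Set (G, k)  ->  Clique (complement of G, k).
# Here g is the identity (k' = k) and the transformation runs in polynomial time.
def independent_set_to_clique(vertices, edges, k):
    edge_set = {frozenset(e) for e in edges}
    complement_edges = [(u, v) for u, v in combinations(vertices, 2)
                        if frozenset((u, v)) not in edge_set]
    return vertices, complement_edges, k

# A path on three vertices has an independent set {1, 3} of size 2, and the
# complement graph contains the edge (1, 3), i.e. a clique of size 2.
print(independent_set_to_clique([1, 2, 3], [(1, 2), (2, 3)], 2))
</syntaxhighlight>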
* Question: Does the formula have a satisfying assignment of [[Hamming weight]] exactly {{mvar|k}}?
 
It can be shown that for <math>t\geq2</math> the problem Weighted {{mvar|t}}-Normalize SAT is complete for <math>W[t]</math> under fpt-reductions.<ref>{{cite journal |last1=Buss |first1=Jonathan F |last2=Islam |first2=Tarique |title=Simplifying the weft hierarchy |journal=[[Theoretical Computer Science (journal)|Theoretical Computer Science]] |year=2006 |volume=351 |number=3 |pages=303–313 |doi=10.1016/j.tcs.2005.10.002|doi-access=free }}</ref>
Here, '''Weighted {{mvar|t}}-Normalize SAT''' is the following problem:
 
 
=== A hierarchy ===
The '''A hierarchy''' is a collection of computational complexity classes similar to the W hierarchy. However, while the W hierarchy is a hierarchy contained in NP, the A hierarchy more closely mimics the [[polynomial-time hierarchy]] from classical complexity. It is known that A[1] = W[1] holds.
 
== Notes ==
|last3 = Xia
|title = Improved Parameterized Upper Bounds for Vertex Cover
|journal = [[International Symposium on Mathematical Foundations of Computer Science|MFCS]] 2006
|pages = 238–249
|year = 2006
* [[The Computer Journal]]. Volume 51, Numbers 1 and 3 (2008). [https://web.archive.org/web/20051125014940/http://comjnl.oxfordjournals.org/ The Computer Journal]. Special Double Issue on Parameterized Complexity with 15 survey articles, book review, and a Foreword by Guest Editors R. Downey, M. Fellows and M. Langston.
 
== External links ==