In [[computer science]], '''parameterized complexity''' is a branch of [[computational complexity theory]] that focuses on classifying [[computational problems]] according to their inherent difficulty with respect to ''multiple'' parameters of the input or output. The complexity of a problem is then measured as a [[Function (mathematics)|function]] of those parameters. This allows the classification of [[NP-hard]] problems on a finer scale than in the classical setting, where the complexity of a problem is only measured as a function of the number of bits in the input. The first systematic work on parameterized complexity was done by {{harvtxt|Downey|Fellows|1999}}.
Under the assumption that [[P versus NP problem|P ≠ NP]], there exist many natural problems that require superpolynomial [[running time]] when complexity is measured in terms of the input size only, but that are computable in a time that is polynomial in the input size and exponential or worse in a parameter {{mvar|k}}. Hence, if {{mvar|k}} is fixed at a small value and the growth of the function over {{mvar|k}} is relatively small, such problems can still be considered "tractable" despite their traditional classification as "intractable".
The existence of efficient, exact, and deterministic solving algorithms for [[NP-complete]], or otherwise [[NP-hard]], problems is considered unlikely if input parameters are not fixed; all known solving algorithms for these problems require time that is [[Exponential time|exponential]] (or at least superpolynomial) in the total size of the input. However, some problems can be solved by algorithms that are exponential only in the size of a fixed parameter while polynomial in the size of the input. Such an algorithm is called a fixed-parameter tractable ('''fpt-''')algorithm, because the problem can be solved efficiently for small values of the fixed parameter.
Problems in which some parameter {{mvar|k}} is fixed are called parameterized problems. A parameterized problem that admits such an fpt-algorithm is said to be a '''fixed-parameter tractable''' problem and belongs to the class {{sans-serif|FPT}}; '''fixed-parameter tractability''' was also the early name of the theory of parameterized complexity.
Many problems have the following form: given an object {{mvar|x}} and a nonnegative integer {{mvar|k}}, does {{mvar|x}} have some property that depends on {{mvar|k}}? For instance, for the [[vertex cover problem]], the parameter can be the number of vertices in the cover. In many applications, for example when modelling error correction, one can assume the parameter to be "small" compared to the total input size. Then it is challenging to find an algorithm that is exponential ''only'' in {{mvar|k}}, and not in the input size.
In this way, parameterized complexity can be seen as ''two-dimensional'' complexity theory. This concept is formalized as follows:
:A parameterized problem {{mvar|L}} is ''fixed-parameter tractable'' if the question "<math>(x, k) \in L</math>?" can be decided in running time <math>f(k) \cdot |x|^{O(1)}</math>, where {{mvar|f}} is an arbitrary function depending only on {{mvar|k}}. The corresponding complexity class is called '''FPT'''.
For example, there is an algorithm that solves the vertex cover problem in <math>O(1.2738^k + kn)</math> time, where {{mvar|n}} is the number of vertices and {{mvar|k}} is the size of the vertex cover. This means that vertex cover is fixed-parameter tractable with respect to this parameter.
== Complexity classes ==
=== FPT ===
FPT contains the ''fixed parameter tractable'' problems, which are those that can be solved in time <math>f(k) \cdot {|x|}^{O(1)}</math> for some computable function {{mvar|f}}. Typically, this function is thought of as single exponential, such as <math>2^{O(k)}</math>, but the definition admits functions that grow even faster. This is essential for a large part of the early history of this class. The crucial part of the definition is to exclude functions of the form <math>f(n,k)</math>, such as <math>n^k</math>, in which the parameter appears in the exponent of the input size.
An example is the [[Boolean satisfiability]] problem, parameterised by the number of variables. A given formula of size {{mvar|m}} with {{mvar|k}} variables can be checked by brute force in time <math>O(2^k m)</math>. A [[vertex cover]] of size {{mvar|k}} in a graph of order {{mvar|n}} can be found in time <math>O(2^k n)</math>, so the vertex cover problem, parameterised by the size of the cover, is also in FPT.
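The <math>O(2^k n)</math> bound for vertex cover comes from a bounded search tree: pick any remaining edge, observe that every cover must contain one of its two endpoints, and branch on both choices with budget {{mvar|k}} − 1. A minimal sketch in Python (the function name and the edge-list representation are illustrative, not taken from any particular source):

<syntaxhighlight lang="python">
def vertex_cover(edges, k):
    """Decide whether the graph given by `edges` has a vertex cover of size <= k.

    Bounded-search-tree method: any cover must contain an endpoint of the
    first uncovered edge, so branch on both endpoints with budget k - 1.
    The search tree has at most 2^k leaves, giving O(2^k * |edges|) time.
    """
    if not edges:
        return True          # nothing left to cover
    if k == 0:
        return False         # edges remain but the budget is exhausted
    u, v = edges[0]
    # Branch 1: put u in the cover and delete all edges incident to u.
    without_u = [(a, b) for (a, b) in edges if a != u and b != u]
    # Branch 2: put v in the cover and delete all edges incident to v.
    without_v = [(a, b) for (a, b) in edges if a != v and b != v]
    return vertex_cover(without_u, k - 1) or vertex_cover(without_v, k - 1)


# Example: a 4-cycle has a vertex cover of size 2 but not of size 1.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
assert vertex_cover(cycle, 2) is True
assert vertex_cover(cycle, 1) is False
</syntaxhighlight>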
An example of a problem that is thought not to be in FPT is [[graph coloring]] parameterised by the number of colors. It is known that 3-coloring is [[NP-hard]], and an algorithm for graph {{mvar|k}}-colouring in time <math>f(k)n^{O(1)}</math> for <math>k=3</math> would run in polynomial time in the size of the input. Thus, if graph coloring parameterised by the number of colors were in FPT, then [[P versus NP problem|P = NP]].
There are a number of alternative definitions of FPT. For example, the running-time requirement can be replaced by <math>f(k) + |x|^{O(1)}</math>, which defines the same class. Also, a parameterised problem is in FPT if it has a so-called kernel: [[kernelization]] is a preprocessing technique that reduces the original instance, in polynomial time, to an equivalent instance (the "problem kernel") whose size is bounded by a function of the parameter {{mvar|k}} alone.
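To illustrate kernelization, here is a sketch (assuming an edge-list representation; names are illustrative) of the classical Buss reduction rules for vertex cover: a vertex of degree greater than {{mvar|k}} must belong to every cover of size at most {{mvar|k}}, so it can be taken greedily; once no such vertex remains, an instance with more than <math>k^2</math> edges can be rejected, and what survives is a kernel with at most <math>k^2</math> edges.

<syntaxhighlight lang="python">
def vc_kernelize(edges, k):
    """Buss-style kernelization for vertex cover (illustrative sketch).

    Returns (kernel_edges, k_remaining) such that the original graph has a
    vertex cover of size <= k iff the kernel has one of size <= k_remaining,
    or None if the instance can already be rejected.
    """
    edges = list(edges)
    while k >= 0:
        # Count degrees over the current edge set.
        degree = {}
        for u, v in edges:
            degree[u] = degree.get(u, 0) + 1
            degree[v] = degree.get(v, 0) + 1
        # A vertex of degree > k must be in every cover of size <= k,
        # since otherwise all of its more than k neighbours would be needed.
        forced = [v for v, d in degree.items() if d > k]
        if not forced:
            break
        w = forced[0]
        edges = [(a, b) for (a, b) in edges if a != w and b != w]
        k -= 1
    # With maximum degree <= k, any cover of size <= k covers at most k*k edges.
    if k < 0 or len(edges) > k * k:
        return None
    return edges, k
</syntaxhighlight>

Any exact method, even brute force, then runs on the kernel in time depending on {{mvar|k}} alone, which together with the polynomial preprocessing places the problem in FPT.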
FPT is closed under a parameterised notion of [[Reduction (complexity)|reductions]] called '''''fpt-reductions'''''. Such reductions transform an instance <math>(x,k)</math> of some problem into an equivalent instance <math>(x',k')</math> of another problem (with <math>k' \leq g(k)</math>) and can be computed in time <math>f(k)\cdot p(|x|)</math> where <math>p</math> is a polynomial.
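A standard textbook example of an fpt-reduction (a sketch, not tied to this article's sources) maps an instance <math>(G,k)</math> of the independent set problem to the instance <math>(\bar{G},k)</math> of the clique problem, where <math>\bar{G}</math> is the complement graph: the transformation takes polynomial time and leaves the parameter unchanged, so <math>k' \leq g(k)</math> holds trivially.

<syntaxhighlight lang="python">
from itertools import combinations

def independent_set_to_clique(vertices, edges, k):
    """fpt-reduction from Independent Set to Clique.

    G has an independent set of size k iff its complement graph has a clique
    of size k; the parameter is unchanged (k' = k) and the transformation is
    polynomial, so this satisfies the definition of an fpt-reduction.
    """
    edge_set = {frozenset(e) for e in edges}
    complement_edges = [
        (u, v) for u, v in combinations(vertices, 2)
        if frozenset((u, v)) not in edge_set
    ]
    return vertices, complement_edges, k
</syntaxhighlight>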
* Question: Does the formula have a satisfying assignment of [[Hamming weight]] exactly {{mvar|k}}?
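The obvious way to decide this question is to try every choice of {{mvar|k}} variables to set to true, roughly <math>n^k</math> candidate assignments, so the exponent of the running time grows with the parameter; this is exactly the behaviour that fixed-parameter tractable algorithms must avoid. A sketch (with illustrative names; the formula is passed as a predicate on the set of true variables):

<syntaxhighlight lang="python">
from itertools import combinations

def has_weight_k_satisfying_assignment(formula, variables, k):
    """Brute-force check for a satisfying assignment of Hamming weight exactly k.

    Tries all C(n, k) ~ n^k ways of choosing which k variables are true; the
    exponent of the polynomial depends on k, so this is not an fpt-algorithm.
    """
    for true_vars in combinations(variables, k):
        if formula(frozenset(true_vars)):
            return True
    return False


# Example: (x1 or x2) and (not x1 or not x2) is satisfied exactly by the
# weight-1 assignments {x1} and {x2}.
phi = lambda t: (("x1" in t) or ("x2" in t)) and not (("x1" in t) and ("x2" in t))
assert has_weight_k_satisfying_assignment(phi, ["x1", "x2"], 1) is True
assert has_weight_k_satisfying_assignment(phi, ["x1", "x2"], 2) is False
</syntaxhighlight>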
It can be shown that for <math>t\geq2</math> the problem Weighted {{mvar|t}}-Normalize SAT is complete for <math>W[t]</math> under fpt-reductions.<ref>{{cite journal |last1=Buss |first1=Jonathan F |last2=Islam |first2=Tarique |title=Simplifying the weft hierarchy |journal=[[Theoretical Computer Science (journal)|Theoretical Computer Science]] |year=2006 |volume=351 |number=3 |pages=303–313 |doi=10.1016/j.tcs.2005.10.002|doi-access=free }}</ref>
Here, '''Weighted {{mvar|t}}-Normalize SAT''' is the following problem:
=== A hierarchy ===
The '''A hierarchy''' is a collection of computational complexity classes similar to the W hierarchy. However, while the W hierarchy is a hierarchy contained in NP, the A hierarchy more closely mimics the [[polynomial-time hierarchy]] from classical complexity. It is known that A[1] = W[1] holds.
== Notes ==
* {{cite journal |last1=Chen |first1=Jianer |last2=Kanj |first2=Iyad |last3=Xia |first3=Ge |title=Improved Parameterized Upper Bounds for Vertex Cover |journal=[[International Symposium on Mathematical Foundations of Computer Science|MFCS]] 2006 |pages=238–249 |year=2006}}
* [[The Computer Journal]]. Volume 51, Numbers 1 and 3 (2008). [https://web.archive.org/web/20051125014940/http://comjnl.oxfordjournals.org/ The Computer Journal]. Special Double Issue on Parameterized Complexity with 15 survey articles, book review, and a Foreword by Guest Editors R. Downey, M. Fellows and M. Langston.
== External links ==