In [[computer science]], '''parameterized complexity''' is a branch of [[computational complexity theory]] that focuses on classifying [[computational problems]] according to their inherent difficulty with respect to ''multiple'' parameters of the input or output. The complexity of a problem is then measured as a [[Function (mathematics)|function]] of those parameters. This allows the classification of [[NP-hard]] problems on a finer scale than in the classical setting, where the complexity of a problem is only measured as a function of the number of bits in the input. This appears to have been first demonstrated in {{harvtxt|Gurevich|Stockmeyer|Vishkin|1984}}. The first systematic work on parameterized complexity was done by {{harvtxt|Downey|Fellows|1999}}.
Under the assumption that [[P versus NP problem|P ≠ NP]], there exist many natural problems that require super-polynomial running time when complexity is measured in terms of the input size only, but that are computable in a time that is polynomial in the input size and exponential or worse in a parameter {{mvar|k}}. Hence, if {{mvar|k}} is fixed at a small value and the growth of the function over {{mvar|k}} is relatively small, then such problems can still be considered "tractable" despite their traditional classification as "intractable".
The existence of efficient, exact, and deterministic solving algorithms for [[NP-complete]], or otherwise [[NP-hard]], problems is considered unlikely if input parameters are not fixed; all known solving algorithms for these problems require time that is [[Exponential time|exponential]] (so in particular super-polynomial) in the total size of the input. However, some problems can be solved by algorithms that are exponential only in the size of a fixed parameter while polynomial in the size of the input. Such an algorithm is called a '''fixed-parameter tractable''' ('''FPT''') algorithm, because the problem can be solved efficiently (i.e., in polynomial time) for constant values of the fixed parameter.
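Concretely, such a running time has the standard form (stated here for precision, though not spelled out in the paragraph above)
<math display="block">f(k) \cdot |x|^{O(1)},</math>
where {{mvar|x}} is the input, {{mvar|k}} is the parameter, and {{mvar|f}} is a [[computable function]] depending only on {{mvar|k}}: the exponential (or worse) growth is confined entirely to the parameter.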
Problems in which some parameter {{mvar|k}} is fixed are called parameterized problems. A parameterized problem that allows for such an FPT algorithm is said to be a '''fixed-parameter tractable''' problem and belongs to the class {{sans-serif|FPT}}; the early name of the theory of parameterized complexity was '''fixed-parameter tractability'''.
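As an illustration, a textbook FPT algorithm is the bounded search tree for [[vertex cover]]: any cover must contain at least one endpoint of every edge, so branching on the two endpoints of an uncovered edge decides whether a cover of size at most {{mvar|k}} exists in {{math|''O''(2{{sup|''k''}} · ''m'')}} time on a graph with {{mvar|m}} edges, exponential only in the parameter. The following Python sketch is illustrative only and is not drawn from the cited sources:

<syntaxhighlight lang="python">
def vertex_cover(edges, k):
    """Decide whether the graph given by `edges` has a vertex cover
    of size at most k, using the classic bounded search tree.

    Any cover must contain an endpoint of each edge, so branch on
    both endpoints of an uncovered edge with budget k - 1.
    Running time: O(2^k * m) for m edges -- exponential only in the
    parameter k, polynomial in the input size.
    """
    if k < 0:
        return False
    # Pick any remaining (uncovered) edge; if none remain, we are done.
    uncovered = next(iter(edges), None)
    if uncovered is None:
        return True
    u, v = uncovered
    # Branch: put u (resp. v) into the cover, discard edges it covers.
    rest_u = [(a, b) for (a, b) in edges if a != u and b != u]
    rest_v = [(a, b) for (a, b) in edges if a != v and b != v]
    return vertex_cover(rest_u, k - 1) or vertex_cover(rest_v, k - 1)

# Example: a 4-cycle has a vertex cover of size 2 but not of size 1.
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(vertex_cover(cycle, 2))  # True
print(vertex_cover(cycle, 1))  # False
</syntaxhighlight>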