'''Parameterized complexity''' is a branch of [[computational complexity theory]] in [[computer science]] that focuses on classifying [[computational problems]] according to their inherent difficulty with respect to ''multiple'' parameters of the input. In parameterized complexity, the complexity of a problem is measured as a [[function]] of two or more parameters of the input. In this way, parameterized complexity classifies [[NP-hard]] problems on a finer scale than is possible in the classical setting, where the complexity of a problem is measured only by the number of bits in the input.
The first systematic work on parameterized complexity was done by {{harvtxt|Downey|Fellows|1999}}.
The existence of efficient, exact, and deterministic algorithms for [[NP-complete]], or otherwise [[NP-hard]], problems is considered unlikely if input parameters are not fixed; all known algorithms for these problems require time that is [[Exponential time|exponential]] in the total size of the input. However, some problems can be solved by algorithms that are exponential only in the size of a fixed parameter while polynomial in the size of the input. Such an algorithm is called a [[fixed-parameter tractable]] (fpt-)algorithm, because the problem can be solved efficiently for small values of the fixed parameter.
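As an illustration (not part of the original article text), a classic textbook example of an fpt-algorithm is the bounded-search-tree algorithm for the ''k''-Vertex-Cover problem: deciding whether a graph has a vertex cover of size at most ''k''. Its running time is roughly O(2<sup>''k''</sup>·''m''), so the exponential cost depends only on the parameter ''k'', not on the size of the graph. The sketch below assumes a simple graph given as a list of edge pairs:

```python
# Sketch of the classic bounded-search-tree fpt-algorithm for
# k-Vertex-Cover (assumes a simple graph with no self-loops).
# Every edge (u, v) must be covered by u or by v, so we branch on
# both choices; the recursion depth is at most k, giving a search
# tree of size at most 2^k.

def has_vertex_cover(edges, k):
    """Return True iff the graph given by `edges` has a vertex cover
    of size at most k."""
    if not edges:
        return True   # no edges left: the empty set covers everything
    if k == 0:
        return False  # edges remain, but no budget left to cover them
    u, v = edges[0]
    # Branch 1: put u in the cover; discard all edges incident to u.
    # Branch 2: put v in the cover; discard all edges incident to v.
    return (has_vertex_cover([e for e in edges if u not in e], k - 1) or
            has_vertex_cover([e for e in edges if v not in e], k - 1))
```

For example, a triangle has a vertex cover of size 2 but none of size 1, so `has_vertex_cover([(1, 2), (2, 3), (1, 3)], 2)` is `True` while the same call with `k = 1` is `False`.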