Parameterized complexity

In [[computer science]], '''parameterized complexity''' is a measure of the complexity of problems with multiple input parameters. The theory of parameterized complexity was developed in the 1990s by Rod Downey and Michael Fellows. Their 1999 [http://www.springer.com/east/home?SGWID=5-102-22-1519914-0&referer=www.springer.de%2Fcgi-bin%2Fsearch_book.pl%3Fisbn%3D0-387-94883-X&SHORTCUT=www.springer.com/sgw/cda/frontpage/0,11855,5-0-22-1519914-0,00.html monograph] presents an introduction to the field.
 
The theory of parameterized complexity is motivated, among other things, by the observation that there exist several hard problems that (most likely) require exponential runtime when complexity is measured in terms of the input size only, but that are computable in a time that is polynomial in the input size and exponential in a (small) parameter, k. Hence, if k is fixed at a small value, such problems can still be considered "tractable" despite their traditional classification as "intractable".
 
The existence of efficient, exact, and deterministic solving algorithms for [[NP-complete]], or otherwise [[NP-hard]], problems is considered unlikely, if input parameters are not fixed; all known solving algorithms for these problems require time that is [[Exponential time|exponential]] in the total size of the input. However, some problems can be solved by algorithms that are exponential only in the size of a fixed parameter while polynomial in the size of the input. Such an algorithm is called a [[fixed-parameter tractable]] (fpt-)algorithm, because the problem can be solved efficiently for small values of the fixed parameter.
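As an illustration (a sketch not taken from the monograph above), the classic bounded-search-tree algorithm for [[vertex cover]] parameterized by the solution size k runs in O(2<sup>k</sup>·m) time on a graph with m edges: the running time is exponential only in k, not in the input size, so it is an fpt-algorithm.

```python
def vertex_cover(edges, k):
    """Return a vertex cover of size <= k as a set, or None if none exists.

    Bounded search tree: pick any uncovered edge (u, v); every vertex cover
    must contain u or v, so branch on the two choices. The tree has depth
    at most k and branching factor 2, hence at most 2^k leaves, giving
    O(2^k * m) time overall -- exponential only in the parameter k.
    """
    if not edges:
        return set()          # nothing left to cover
    if k == 0:
        return None           # edges remain but the budget is exhausted
    u, v = edges[0]           # an arbitrary uncovered edge
    for chosen in (u, v):
        # Remove all edges covered by the chosen endpoint and recurse.
        rest = [(a, b) for (a, b) in edges if chosen not in (a, b)]
        sub = vertex_cover(rest, k - 1)
        if sub is not None:
            return sub | {chosen}
    return None

# A triangle needs two vertices to cover all three edges:
triangle = [(1, 2), (2, 3), (1, 3)]
cover = vertex_cover(triangle, 2)   # a set of at most 2 vertices
no_cover = vertex_cover(triangle, 1)  # None: one vertex cannot suffice
```

For fixed small k the algorithm is practical even on large graphs, which is exactly the sense in which the problem is "fixed-parameter tractable" despite vertex cover being NP-complete in general.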