Parameterized complexity

In [[computer science]], '''parameterized complexity''' is a measure of the complexity of problems with multiple input parameters. The theory of parameterized complexity was developed in the 1990s by [[Rod Downey]] and [[Michael Fellows]]. Their 1999 [http://www.springer.com/east/home?SGWID=5-102-22-1519914-0&referer=www.springer.de%2Fcgi-bin%2Fsearch_book.pl%3Fisbn%3D0-387-94883-X&SHORTCUT=www.springer.com/sgw/cda/frontpage/0,11855,5-0-22-1519914-0,00.html monograph] presents an introduction to the field.
 
The theory of parameterized complexity is motivated, among other things, by the observation that several hard problems (most likely) require exponential runtime when complexity is measured in terms of the input size alone, but are solvable in time that is polynomial in the input size and exponential only in a (small) parameter ''k''. Hence, if ''k'' is fixed at a small value, such problems can still be considered "tractable" despite their traditional classification as "intractable".
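A classic illustration of this idea (not part of the original text) is the bounded-search-tree algorithm for [[vertex cover]]: deciding whether a graph has a vertex cover of size at most ''k'' takes roughly O(2^''k'' · ''m'') time for ''m'' edges, which is polynomial in the input for any fixed ''k''. A minimal Python sketch, with hypothetical function and variable names chosen for this example:

```python
def has_vertex_cover(edges, k):
    """Decide whether a graph, given as a list of edges (u, v),
    has a vertex cover of size at most k.

    Bounded search tree: any cover must contain an endpoint of
    every edge, so pick one uncovered edge and branch on its two
    endpoints. The search tree has depth at most k and branching
    factor 2, giving O(2^k * m) time -- exponential only in k.
    """
    if not edges:          # every edge is covered
        return True
    if k == 0:             # budget exhausted but edges remain
        return False
    u, v = edges[0]        # an arbitrary uncovered edge
    # Branch 1: put u in the cover; edges touching u are now covered.
    if has_vertex_cover([e for e in edges if u not in e], k - 1):
        return True
    # Branch 2: put v in the cover instead.
    return has_vertex_cover([e for e in edges if v not in e], k - 1)
```

For example, a triangle has a vertex cover of size 2 (any two of its vertices) but not of size 1, since no single vertex touches all three edges.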