{{Short description|Amount of resources to perform an algorithm}}
{{CS1 config|mode=cs2}}
{{nomore footnotes|date=December 2017}}
In [[computer science]], the '''computational complexity''' or simply '''complexity''' of an [[algorithm]] is the amount of resources required to run it.<ref>{{citation
| last = Vadhan | first = Salil | author-link = Salil Vadhan
| editor1-last = van Tilborg | editor1-first = Henk C. A.
| editor2-last = Jajodia | editor2-first = Sushil
| contribution = Computational Complexity
| contribution-url = https://people.seas.harvard.edu/~salil/research/ComputationalComplexity-2ndEd.pdf
| doi = 10.1007/978-1-4419-5906-5_442
| isbn = 9781441959065
| pages = 235–240
| publisher = Springer
| title = Encyclopedia of Cryptography and Security
| year = 2011}}</ref> Particular focus is given to [[time complexity|computation time]] (generally measured by the number of needed elementary operations) and [[space complexity|memory storage]] requirements. The complexity of a [[computational problem|problem]] is the complexity of the best algorithms that allow solving the problem.
 
The study of the complexity of explicitly given algorithms is called [[analysis of algorithms]], while the study of the complexity of problems is called [[computational complexity theory]]. The two areas are closely related, as the complexity of an algorithm is always an [[upper bound]] on the complexity of the problem solved by this algorithm. Moreover, for designing efficient algorithms, it is often fundamental to compare the complexity of a specific algorithm to the complexity of the problem to be solved. Also, in most cases, the only thing that is known about the complexity of a problem is that it is not higher than the complexity of the most efficient known algorithms. Therefore, there is a large overlap between analysis of algorithms and complexity theory.
 
As the amount of resources required to run an algorithm generally varies with the size of the input, the complexity is typically expressed as a function {{math|''n'' → ''f''(''n'')}}, where {{math|''n''}} is the size of the input and {{math|''f''(''n'')}} is either the [[worst-case complexity]] (the maximum of the amount of resources that are needed over all inputs of size {{math|''n''}}) or the [[average-case complexity]] (the average of the amount of resources over all inputs of size {{math|''n''}}). [[Time complexity]] is generally expressed as the number of required elementary operations on an input of size {{math|''n''}}, where elementary operations are assumed to take a constant amount of time on a given computer and change only by a constant factor when run on a different computer. [[Space complexity]] is generally expressed as the amount of [[computer memory|memory]] required by an algorithm on an input of size {{math|''n''}}.
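
For illustration, the following Python sketch (a minimal example written for this purpose, not a standard routine) counts entry comparisons, a common choice of elementary operation, for a [[linear search]] on an input of size {{math|''n'' {{=}} 100}}: the worst case (target absent) takes exactly {{math|''n''}} comparisons, while a successful search takes about {{math|''n''/2}} comparisons on average.

<syntaxhighlight lang="python">
def linear_search(data, target):
    """Return (index, comparisons), counting entry comparisons
    as the elementary operation."""
    comparisons = 0
    for i, value in enumerate(data):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

data = list(range(100))             # input of size n = 100
_, worst = linear_search(data, -1)  # worst case: target absent
avg = sum(linear_search(data, t)[1] for t in data) / len(data)
print(worst, avg)                   # 100 and 50.5
</syntaxhighlight>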
 
{{anchor|bit complexity}}
For many algorithms the size of the integers that are used during a computation is not bounded, and it is not realistic to consider that arithmetic operations take a constant time. Therefore, the time complexity, generally called '''bit complexity''' in this context, may be much larger than the arithmetic complexity. For example, the arithmetic complexity of the computation of the [[determinant]] of an {{math|''n''×''n''}} [[integer matrix]] is <math>O(n^3)</math> for the usual algorithms ([[Gaussian elimination]]). The bit complexity of the same algorithms is [[exponential function|exponential]] in {{mvar|n}}, because the size of the coefficients may grow exponentially during the computation. On the other hand, if these algorithms are coupled with [[modular arithmetic|multi-modular arithmetic]], the bit complexity may be reduced to {{math|[[soft O notation|''O''<sup>~</sup>(''n''<sup>4</sup>)]]}}.
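
The following minimal Python sketch (an illustration, not one of the algorithms discussed above) shows the gap between the two measures: the loop performs only {{mvar|n}} arithmetic operations, but the operands double in size at every step, so its bit complexity is exponential in {{mvar|n}}.

<syntaxhighlight lang="python">
n = 20
x = 2
for _ in range(n):
    x = x * x              # one arithmetic operation per iteration
# After n squarings, x = 2**(2**n), an integer of 2**n + 1 bits;
# the arithmetic complexity is O(n) but the bit complexity is exponential.
print(x.bit_length())      # 1048577, i.e. 2**20 + 1
</syntaxhighlight>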
 
In [[sorting]] and [[search algorithm|searching]], the resource that is generally considered is the number of entry comparisons. This is generally a good measure of the time complexity if data are suitably organized.
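
As a sketch of how such counts can be observed in practice (an illustrative instrumentation, not a prescribed method), one may wrap the comparison function used by a sort:

<syntaxhighlight lang="python">
import functools

count = 0
def compare(a, b):
    """Entry comparison, the elementary operation counted here."""
    global count
    count += 1
    return (a > b) - (a < b)

data = [5, 3, 8, 1, 9, 2]
sorted(data, key=functools.cmp_to_key(compare))
print(count)  # number of entry comparisons performed by the sort
</syntaxhighlight>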
 
==Models of computation==
The evaluation of the complexity relies on the choice of a [[model of computation]], which consists of defining the basic operations that are done in a unit of time. When the model of computation is not explicitly specified, it is generally{{According to whom|date=April 2024}} assumed to be a [[multitape Turing machine]], since several more realistic models of computation, such as [[random-access machine]]s, are asymptotically equivalent for most problems. It is only for very specific and difficult problems, such as [[integer multiplication]] in time <math>O(n\log n),</math> that the explicit definition of the model of computation is required for proofs.
 
===Deterministic models===
==See also==
* [[Computational complexity of mathematical operations]]
* [[Chinese Postman Problem Complexity List]]
* [[Master theorem (analysis of algorithms)]]
 
==References==
{{Reflist}}
*{{Citation
 | last1=Arora | first1=Sanjeev | author-link1=Sanjeev Arora
 | last2=Barak | first2=Boaz
 | title=Computational Complexity: A Modern Approach
 | publisher=Cambridge University Press
 | year=2009
 | isbn=978-0-521-42426-4
}}
*{{Citation
 | last1=Sipser | first1=Michael | author-link1=Michael Sipser
 | title=Introduction to the Theory of Computation
 | publisher=PWS Publishing
 | year=1997
 | isbn=0-534-95097-3
}}
 
{{Computer science}}
 
[[Category:Analysis of algorithms]]
[[Category:Computational complexity theory]]