{{Short description|Amount of resources to perform an algorithm}}
{{CS1 config|mode=cs2}}
In [[computer science]], the '''computational complexity''' or simply '''complexity''' of an [[algorithm]] is the amount of resources required to run it.<ref>{{citation
| last = Vadhan | first = Salil | author-link = Salil Vadhan
| editor1-last = van Tilborg | editor1-first = Henk C. A.
| editor2-last = Jajodia | editor2-first = Sushil
| contribution = Computational Complexity
| contribution-url = https://people.seas.harvard.edu/~salil/research/ComputationalComplexity-2ndEd.pdf
| doi = 10.1007/978-1-4419-5906-5_442
| isbn = 9781441959065
| pages = 235–240
| publisher = Springer
| title = Encyclopedia of Cryptography and Security
}}</ref> Particular focus is given to [[time complexity|time]] and [[space complexity|memory]] requirements. The complexity of a [[computational problem|problem]] is the complexity of the best algorithms that allow solving the problem.
The study of the complexity of explicitly given algorithms is called [[analysis of algorithms]], while the study of the complexity of problems is called [[computational complexity theory]]. Both areas are highly related, as the complexity of an algorithm is always an [[upper bound]] on the complexity of the problem solved by this algorithm. Moreover, for designing efficient algorithms, it is often fundamental to compare the complexity of a specific algorithm to the complexity of the problem to be solved. Also, in most cases, the only thing that is known about the complexity of a problem is that it is lower than the complexity of the most efficient known algorithms. Therefore, there is a large overlap between analysis of algorithms and complexity theory.
As the amount of resources required to run an algorithm generally varies with the size of the input, the complexity is typically expressed as a function {{math|''n'' → ''f''(''n'')}}, where {{math|''n''}} is the size of the input and {{math|''f''(''n'')}} is either the [[worst-case complexity]] (the maximum of the amount of resources that are needed over all inputs of size {{math|''n''}}) or the [[average-case complexity]] (the average of the amount of resources over all inputs of size {{math|''n''}}). [[Time complexity]] is generally expressed as the number of required elementary operations on an input of size {{math|''n''}}, where elementary operations are assumed to take a constant amount of time on a given computer and change only by a constant factor when run on a different computer. [[Space complexity]] is generally expressed as the amount of [[computer memory|memory]] required by an algorithm on an input of size {{math|''n''}}.
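For illustration, the following minimal Python sketch (an illustrative example, not drawn from the cited sources) counts the elementary operations, here comparisons between entries, performed by a linear search, and shows how the worst-case count {{math|''f''(''n'') {{=}} ''n''}} differs from the average-case count of about {{math|(''n'' + 1)/2}}:

<syntaxhighlight lang="python">
# Illustrative sketch: count the comparisons (the "elementary operations"
# here) performed by a linear search on inputs of size n.
def linear_search_comparisons(data, target):
    """Return the number of comparisons used to locate target in data."""
    comparisons = 0
    for item in data:
        comparisons += 1
        if item == target:
            break
    return comparisons

n = 1000
data = list(range(n))

# Worst case f(n) = n: the target is the last entry (or absent).
worst = linear_search_comparisons(data, n - 1)

# Average case over all n possible targets: (n + 1)/2 comparisons.
average = sum(linear_search_comparisons(data, t) for t in data) / n

print(worst, average)  # 1000 500.5
</syntaxhighlight>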
{{anchor|bit complexity}}
For many algorithms the size of the integers that are used during a computation is not bounded, and it is not realistic to consider that arithmetic operations take a constant time. Therefore, the time complexity, generally called '''bit complexity''' in this context, may be much larger than the arithmetic complexity. For example, the arithmetic complexity of the computation of the [[determinant]] of an {{math|''n''×''n''}} [[integer matrix]] is <math>O(n^3)</math> for the usual algorithms ([[Gaussian elimination]]). The bit complexity of the same algorithms is [[exponential function|exponential]] in {{mvar|n}}, because the size of the coefficients may grow exponentially during the computation. On the other hand, if these algorithms are coupled with [[modular arithmetic|multi-modular arithmetic]], the bit complexity may be reduced to {{math|[[soft O notation|''O''<sup>~</sup>(''n''<sup>4</sup>)]]}}.
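The distinction can be observed directly. The following Python sketch (an illustration, not taken from a cited source) computes a determinant by Gaussian elimination over exact rationals; the <math>O(n^3)</math> arithmetic operations act on numbers whose bit size may grow during elimination, which is what separates the bit complexity from the arithmetic complexity:

<syntaxhighlight lang="python">
# Illustrative sketch: exact Gaussian elimination on an integer matrix.
# The O(n^3) count of arithmetic operations hides the growth of the
# numerators and denominators, which drives the bit complexity.
from fractions import Fraction
import random

def determinant(matrix):
    """Determinant by Gaussian elimination over exact rationals."""
    a = [[Fraction(x) for x in row] for row in matrix]
    n = len(a)
    det = Fraction(1)
    for col in range(n):
        # Find a nonzero pivot; none means the determinant is zero.
        pivot = next((r for r in range(col, n) if a[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            a[col], a[pivot] = a[pivot], a[col]
            det = -det  # each row swap flips the sign
        det *= a[col][col]
        for r in range(col + 1, n):
            factor = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= factor * a[col][c]
    return det

random.seed(0)
m = [[random.randint(-9, 9) for _ in range(8)] for _ in range(8)]
print(determinant(m))  # an exact integer, despite rational intermediates
</syntaxhighlight>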
In [[sorting]] and [[search algorithm|searching]], the resource that is generally considered is the number of comparisons between entries. This is generally a good measure of the time complexity if data are suitably organized.
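For instance, a binary search on a sorted array locates an entry with about {{math|log<sub>2</sub> ''n''}} probes; the following Python sketch (illustrative, not from a cited source) counts those comparisons directly:

<syntaxhighlight lang="python">
# Illustrative sketch: count entry comparisons made by binary search.
def binary_search_comparisons(data, target):
    """Return (found, number of comparisons) for binary search on sorted data."""
    lo, hi, comparisons = 0, len(data) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if data[mid] == target:
            return True, comparisons
        comparisons += 1
        if data[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return False, comparisons

data = list(range(1_000_000))
# At most about 2*log2(n) comparisons, versus up to n for a linear scan.
print(binary_search_comparisons(data, 999_999))
</syntaxhighlight>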
==Complexity as a function of input size==
==Models of computation==
The evaluation of the complexity relies on the choice of a [[model of computation]], which consists in defining the basic operations that are done in a unit of time. When the model of computation is not explicitly specified, it is generally implicitly assumed to be a [[multitape Turing machine]].
===Deterministic models===
==Problem complexity (lower bounds)==
The complexity of a problem is the [[infimum]] of the complexities of the algorithms that may solve the problem{{Citation needed|date=May 2023}}, including unknown algorithms. Thus the complexity of a problem is not greater than the complexity of any algorithm that solves the problem.
It follows that every complexity of an algorithm that is expressed with [[big O notation]] is also an upper bound on the complexity of the corresponding problem.
On the other hand, it is generally hard to obtain nontrivial lower bounds for problem complexity, and there are few methods for obtaining such lower bounds.
==See also==
* [[Computational complexity of mathematical operations]]
* [[Chinese Postman Problem Complexity List]]
* [[Master theorem (analysis of algorithms)]]
==References==
{{Reflist}}
*{{Citation
| last1=Arora | first1=Sanjeev | author-link1=Sanjeev Arora
| last2=Barak | first2=Boaz
| title=Computational Complexity: A Modern Approach
| publisher=[[Cambridge University Press]]
| year=2009
| isbn=978-0-521-42426-4
| zbl=1193.68112
}}
*{{Citation
| last=Calude
| first=Cristian
| author-link=Cristian Calude
| title=Theories of Computational Complexity
| publisher=[[Elsevier]]
| year=1988
| isbn=9780444703569
| pages=487
}}
* {{citation
| last1=Du | first1=Ding-Zhu
| last2=Ko | first2=Ker-I
| title=Theory of Computational Complexity
| publisher=[[John Wiley & Sons]]
| issn= 0167-5060
| year=2000
| isbn=978-0-471-34506-0
}}
*{{Citation
| last=Sipser | first=Michael | author-link=Michael Sipser
| title=[[Introduction to the Theory of Computation]]
| publisher=PWS Publishing
| year=1997
|isbn=0-534-95097-3
}}
{{Computer science}}
[[Category:Analysis of algorithms]]