{{Short description|Amount of resources to perform an algorithm}}
{{no footnotes|date=December 2017}}
{{CS1 config|mode=cs2}}
In [[computer science]], the '''computational complexity''' or simply '''complexity''' of an [[algorithm]] is the amount of resources required to run it.<ref>{{citation
| last = Vadhan | first = Salil | author-link = Salil Vadhan
| editor1-last = van Tilborg | editor1-first = Henk C. A.
| editor2-last = Jajodia | editor2-first = Sushil
| contribution = Computational Complexity
| contribution-url = https://people.seas.harvard.edu/~salil/research/ComputationalComplexity-2ndEd.pdf
| doi = 10.1007/978-1-4419-5906-5_442
| isbn = 9781441959065
| pages = 235–240
| publisher = Springer
| title = Encyclopedia of Cryptography and Security
| year = 2011}}</ref> Particular focus is given to [[time complexity|computation time]] (generally measured by the number of needed elementary operations) and [[space complexity|memory storage]] requirements. The complexity of a [[computational problem|problem]] is the complexity of the best algorithms that allow solving the problem.
The study of the complexity of explicitly given algorithms is called [[analysis of algorithms]], while the study of the complexity of problems is called [[computational complexity theory]]. Both areas are closely related, as the complexity of an algorithm is always an [[upper bound]] on the complexity of the problem solved by this algorithm.
==Resources==
===Time===
The resource that is most commonly considered is time. When "complexity" is used without qualification, this generally means time complexity.
The usual units of time (seconds, minutes etc.) are not used in [[computational complexity theory|complexity theory]], because they are too dependent on the choice of a specific computer and on the evolution of technology. Instead, one counts the number of elementary operations executed during the computation; these operations are assumed to take a constant time on a given machine, and are often called ''steps''.
===Bit complexity===
{{anchor|bit complexity}}
Formally, the ''bit complexity'' refers to the number of operations on [[bit]]s that are needed for running an algorithm. With most [[models of computation]], it equals the time complexity up to a constant factor. On [[computer]]s, the number of operations on [[machine word]]s that are needed is also proportional to the bit complexity. So, the ''time complexity'' and the ''bit complexity'' are equivalent for realistic models of computation.
===Space===
Another important resource is the size of [[computer memory]] that is needed for running algorithms.
===Communication===
{{Main article|communication complexity}}
For the class of [[distributed computation|distributed algorithms]] that are commonly executed by multiple, interacting parties, the resource that is of most interest is the communication complexity. It is the necessary amount of communication between the executing parties.
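As an illustration (a minimal sketch, not drawn from a specific reference), consider the equality problem: two parties each hold an {{mvar|n}}-bit string and must decide whether the two strings are equal. In the trivial protocol below, one party sends its whole input, so {{math|''n'' + 1}} bits are communicated in total; it is known that no deterministic protocol does asymptotically better, so the deterministic communication complexity of equality is <math>\Theta(n)</math>.

<syntaxhighlight lang="python">
def trivial_equality_protocol(x: str, y: str) -> tuple[bool, int]:
    """Decide whether Alice's n-bit string x equals Bob's n-bit string y.

    Alice sends all n bits of x to Bob, who compares locally and replies
    with a single answer bit.  Returns the answer together with the total
    number of bits communicated (n + 1).
    """
    bits_sent = len(x)   # Alice transmits her whole input
    answer = (x == y)    # Bob compares locally
    bits_sent += 1       # Bob sends back the one-bit answer
    return answer, bits_sent

print(trivial_equality_protocol("10110100", "10110100"))  # (True, 9)
</syntaxhighlight>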
===Others===
The number of [[arithmetic operations]] is another resource that is commonly used. In this case, one talks of '''arithmetic complexity'''. If one knows an [[upper bound]] on the size of the [[binary representation]] of the numbers that occur during a computation, the time complexity is generally the product of the arithmetic complexity by a constant factor.
For many algorithms the size of the integers that are used during a computation is not bounded, and it is not realistic to consider that arithmetic operations take a constant time. Therefore, the time complexity, generally called '''bit complexity''' in this context, may be much larger than the arithmetic complexity. For example, the arithmetic complexity of the computation of the [[determinant]] of an {{math|''n''×''n''}} [[integer matrix]] is <math>O(n^3)</math> for the usual algorithms ([[Gaussian elimination]]), but the bit complexity of the same algorithms may be much larger, because the size of the coefficients may grow during the computation.
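As a concrete sketch of this distinction (a hypothetical illustration, not taken from a reference), the following computes {{math|''n''!}} with {{math|''n'' − 1}} multiplications, so its arithmetic complexity is linear in {{mvar|n}}, while the operands grow at every step, so the bit complexity is much larger:

<syntaxhighlight lang="python">
def factorial_with_counts(n: int) -> tuple[int, int, int]:
    """Compute n! while recording the number of multiplications performed
    and the bit size of the final result."""
    result = 1
    multiplications = 0
    for k in range(2, n + 1):
        result *= k          # one arithmetic operation on ever-growing integers
        multiplications += 1
    return result, multiplications, result.bit_length()

# Only n - 1 = 99 multiplications, but the result already has 525 bits:
_, ops, bits = factorial_with_counts(100)
print(ops, bits)  # 99 525
</syntaxhighlight>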
In [[sorting]] and [[search algorithm|searching]], the resource that is generally considered is the number of entry comparisons. This is generally a good measure of the time complexity if data are suitably organized.
==Complexity as a function of input size==
It is impossible to count the number of steps of an algorithm on all possible inputs. As the complexity generally increases with the size of the input, the complexity is typically expressed as a function of the size {{mvar|n}} (in [[bit]]s) of the input. However, the complexity of an algorithm may vary dramatically for different inputs of the same size. Therefore, several complexity functions are commonly used.
The [[worst-case complexity]] is the maximum of the complexity over all inputs of size {{mvar|n}}, and the [[average-case complexity]] is the average of the complexity over all inputs of size {{mvar|n}} (this makes sense, as the number of possible inputs of a given size is finite). Generally, when "complexity" is used without being further specified, this is the worst-case time complexity that is considered.
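For example (a small sketch, with [[linear search]] as the algorithm and the number of comparisons as the resource), both complexity functions can be measured directly by enumerating every input of a given size:

<syntaxhighlight lang="python">
from itertools import permutations

def comparisons_linear_search(data, target) -> int:
    """Count the comparisons a linear search performs before finding target."""
    count = 0
    for item in data:
        count += 1
        if item == target:
            break
    return count

def worst_and_average_case(n: int) -> tuple[int, float]:
    """Worst-case and average-case comparison counts of a linear search
    for the value 0 over all permutations of (0, 1, ..., n - 1)."""
    counts = [comparisons_linear_search(p, 0) for p in permutations(range(n))]
    return max(counts), sum(counts) / len(counts)

print(worst_and_average_case(5))  # (5, 3.0): worst case n, average (n + 1) / 2
</syntaxhighlight>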
==Asymptotic complexity==
{{see also|Asymptotic computational complexity}}
It is generally difficult to compute precisely the worst-case and the average-case complexity. In addition, these exact values have little practical application, as any change of computer or of model of computation would change the complexity somewhat. Moreover, the resource use is not critical for small values of {{mvar|n}}, so that, for small {{mvar|n}}, the ease of implementation is generally more interesting than a low complexity.
For these reasons, one generally focuses on the behavior of the complexity for large {{mvar|n}}, that is, on its [[asymptotic analysis|asymptotic behavior]] when {{mvar|n}} tends to infinity. Therefore, the complexity is generally expressed by using [[big O notation]].
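For example (a worked illustration with an arbitrarily chosen complexity function), <math>f(n) = 3n^2 + 10n + 5</math> belongs to <math>O(n^2)</math>, since
<math display=block>3n^2 + 10n + 5 \le 3n^2 + 10n^2 + 5n^2 = 18n^2 \quad \text{for all } n \ge 1;</math>
the constant 18 witnesses the bound, and only the dominant term <math>n^2</math> matters asymptotically.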
==Models of computation==
The evaluation of the complexity relies on the choice of a [[model of computation]], which consists in defining the basic operations that are done in a unit of time.
===Deterministic models===
A [[deterministic model]] of computation is a model of computation such that the successive states of the machine and the operations to be performed are completely determined by the preceding state. Historically, the first deterministic models were [[μ-recursive function|recursive function]]s, [[lambda calculus]], and [[Turing machine]]s. The model of [[random-access machine]]s (also called RAM-machines) is also widely used, as a closer counterpart to real [[computer]]s.
When the model of computation is not specified, it is generally assumed to be a [[multitape Turing machine]]. For most algorithms, the time complexity is the same on multitape Turing machines as on RAM-machines, although some care may be needed in how data is stored in memory to get this equivalence.
===Non-deterministic computation===
In a [[non-deterministic algorithm|non-deterministic model of computation]], such as [[non-deterministic Turing machine]]s, some choices may be made at some steps of the computation. In complexity theory, one considers all possible choices simultaneously, and the non-deterministic time complexity is the time needed when the best choices are always made. In other words, one considers that the computation is done simultaneously on as many (identical) processors as needed, and the non-deterministic computation time is the time spent by the first processor that finishes the computation. This parallelism is partly amenable to [[quantum computing]] via superposed [[entangled state]]s when running specific [[quantum algorithms]], such as [[Shor's algorithm|Shor's factorization]], which {{as of|2018|03|lc=yes}} had been applied only to small integers (for example, 21 = 3 × 7).
Even though such a computation model is not yet realistic, it has theoretical importance, mostly related to the [[P = NP]] problem, which questions the identity of the complexity classes formed by taking "polynomial time" and "non-deterministic polynomial time" as least upper bounds. Simulating an NP-algorithm on a deterministic computer usually takes "exponential time". A problem is in the complexity class [[NP (complexity)|NP]] if it may be solved in [[polynomial time]] on a non-deterministic machine. A problem is [[NP-complete]] if, roughly speaking, it is in NP and is not easier than every other NP problem. Many [[combinatorics|combinatorial]] problems, such as the [[knapsack problem]], the [[travelling salesman problem]], and the [[Boolean satisfiability problem]], are NP-complete. For all these problems, the best known algorithms have exponential complexity. If any one of these problems could be solved in polynomial time on a deterministic machine, then all NP problems could also be solved in polynomial time, and one would have {{math|1=P = NP}}. {{as of|2017}} it is generally conjectured that {{math|P ≠ NP}}, with the practical implication that the worst cases of NP problems are intrinsically difficult to solve, i.e., take longer than any reasonable time span (decades) for interesting lengths of input.
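To illustrate (a minimal sketch using the [[subset sum problem]] as the NP problem; the function names are hypothetical), checking a proposed solution takes polynomial time, whereas the obvious deterministic algorithm tries all <math>2^n</math> sequences of choices that a non-deterministic machine could explore simultaneously:

<syntaxhighlight lang="python">
from itertools import combinations

def verify(numbers, target, subset) -> bool:
    """Polynomial-time check of a certificate: do the chosen indices sum to target?"""
    return set(subset) <= set(range(len(numbers))) and \
        sum(numbers[i] for i in subset) == target

def deterministic_search(numbers, target):
    """Exhaustive simulation of the non-deterministic choices:
    try every subset of indices -- up to 2^n candidates."""
    for size in range(len(numbers) + 1):
        for subset in combinations(range(len(numbers)), size):
            if verify(numbers, target, subset):
                return subset
    return None

print(deterministic_search([3, 34, 4, 12, 5, 2], 9))  # (2, 4): 4 + 5 == 9
</syntaxhighlight>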
===Parallel and distributed computation===
{{main|Parallel computing|Distributed computing}}
Parallel and distributed computing consist of splitting computation over several processors, which work simultaneously. The difference between the various models lies mainly in the way information is transmitted between processors. Typically, in parallel computing the data transmission between processors is very fast, while in distributed computing the data transmission is done through a [[computer network|network]] and is therefore much slower.
The time needed for a computation on {{mvar|N}} processors is at least the quotient by {{mvar|N}} of the time needed by a single processor. In fact, this theoretically optimal bound can never be reached, because some subtasks cannot be parallelized, and some processors may have to wait for a result from another processor.
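This bound can be made quantitative; the following standard refinement, known as [[Amdahl's law]], is given here as an illustration rather than as part of the argument above. If a fraction {{mvar|s}} of the work is inherently sequential, the time <math>T_N</math> needed on {{mvar|N}} processors satisfies
<math display=block>T_N \ge s\,T_1 + \frac{(1-s)\,T_1}{N} > \frac{T_1}{N},</math>
where <math>T_1</math> is the single-processor time; for example, with <math>s = 0.05</math>, the speedup <math>T_1/T_N</math> can never exceed <math>1/s = 20</math>, however many processors are used.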
===Quantum computing===
A [[quantum computer]] is a computer whose model of computation is based on [[quantum mechanics]]. The [[Church–Turing thesis (complexity theory)|Church–Turing thesis]] applies to quantum computers; that is, every problem that can be solved by a quantum computer can also be solved by a Turing machine. However, some problems may theoretically be solved with a much lower [[time complexity]] using a quantum computer rather than a classical computer. This is, for the moment, purely theoretical, as no one knows how to build an efficient quantum computer.
[[Quantum complexity theory]] has been developed to study the [[complexity class]]es of problems solved using quantum computers. It is used in [[post-quantum cryptography]], which consists of designing [[cryptographic protocol]]s that are resistant to attacks by quantum computers.
==Problem complexity (lower bounds)==
The complexity of a problem is the [[infimum]] of the complexities of the algorithms that may solve the problem{{Citation needed|date=May 2023|reason=}}, including unknown algorithms. Thus the complexity of a problem is not greater than the complexity of any algorithm that solves the problem.
It follows that every complexity of an algorithm that is expressed with [[big O notation]] is also an upper bound on the complexity of the corresponding problem.
On the other hand, it is generally hard to obtain nontrivial lower bounds for problem complexity, and there are few methods for obtaining such lower bounds.
The solution of some problems, typically in [[computer algebra]] and [[computational algebraic geometry]], may be very large. In such a case, the complexity is lower bounded by the maximal size of the output, since the output must be written. For example, a [[system of polynomial equations|system of {{mvar|n}} polynomial equations of degree {{mvar|d}} in {{mvar|n}} indeterminates]] may have up to <math>d^n</math> [[complex number|complex]] solutions, if the number of solutions is finite (this is [[Bézout's theorem]]). As these solutions must be written down, the complexity of this problem is <math>\Omega(d^n).</math> For this problem, an algorithm of complexity <math>d^{O(n)}</math> is known, which may thus be considered as asymptotically quasi-optimal.
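As a worked instance of this bound, already for degree {{math|1=''d'' = 2}} and {{math|1=''n'' = 30}} indeterminates, the output may contain
<math display=block>d^{\,n} = 2^{30} = 1\,073\,741\,824</math>
solutions; no algorithm, however efficient otherwise, can run faster than the time needed to write down these roughly one billion solutions.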
A nonlinear lower bound of <math>\Omega(n\log n)</math> is known for the number of comparisons needed for a [[sorting algorithm]]. Thus the best sorting algorithms are optimal, as their complexity is <math>O(n\log n).</math> This lower bound results from the fact that there are {{math|''n''!}} ways of ordering {{mvar|n}} objects. As each comparison splits in two parts this set of {{math|''n''!}} orders, the number {{mvar|N}} of comparisons that are needed for distinguishing all orders must verify <math>2^N>n!,</math> which implies <math>N =\Omega(n\log n),</math> by [[Stirling's formula]].
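In more detail (a sketch of the final step), taking binary logarithms in <math>2^N > n!</math> and applying Stirling's formula in the form <math>\log_2 n! = n\log_2 n - n\log_2 e + O(\log n)</math> gives
<math display=block>N > \log_2 n! = n\log_2 n - O(n) = \Omega(n\log n).</math>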
A standard method for getting lower bounds of complexity consists of ''reducing'' a problem to another problem. More precisely, suppose that one may encode a problem {{mvar|A}} of size {{mvar|n}} into a subproblem of size {{math|''f''(''n'')}} of a problem {{mvar|B}}, and that the complexity of {{mvar|A}} is <math>\Omega(g(n)).</math> Without loss of generality, one may suppose that the function {{mvar|f}} increases with {{mvar|n}} and has an [[inverse function]] {{mvar|h}}. Then the complexity of the problem {{mvar|B}} is <math>\Omega(g(h(n))).</math> This is the method that is used to prove that, if [[P versus NP problem|P ≠ NP]] (an unsolved conjecture), the complexity of every [[NP-complete]] problem is <math>\Omega(n^k)</math> for every positive integer {{mvar|k}}.
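For example (a hypothetical instance with arbitrarily chosen functions), if a problem {{mvar|A}} with lower bound <math>g(n) = 2^n</math> can be encoded into instances of {{mvar|B}} of size <math>f(n) = n^2</math>, then <math>h(n) = \sqrt{n}</math>, and the complexity of {{mvar|B}} is at least
<math display=block>\Omega\bigl(g(h(n))\bigr) = \Omega\bigl(2^{\sqrt{n}}\bigr).</math>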
==Use in algorithm design==
Evaluating the complexity of an algorithm is an important part of [[algorithm design]], as this gives useful information on the performance that may be expected.
It is a common misconception that the evaluation of the complexity of algorithms will become less important as a result of [[Moore's law]], which posits the [[exponential growth]] of the power of modern [[computer]]s. This is wrong because this power increase allows working with large input data ([[big data]]). For example, when one wants to sort alphabetically a list of a few hundred entries, such as the [[bibliography]] of a book, any algorithm should work well in less than a second. On the other hand, for a list of a million entries (the phone numbers of a large town, for example), the elementary algorithms that require <math>O(n^2)</math> comparisons would have to do about a trillion comparisons, which would need more than a day at ten million comparisons per second. On the other hand, [[quicksort]] and [[merge sort]] require only <math>n\log_2 n</math> comparisons (as average-case complexity for the former, as worst-case complexity for the latter); for {{math|1=''n'' = 1,000,000}}, this gives approximately 20,000,000 comparisons, which would take only two seconds at the same speed.
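The arithmetic behind this example can be checked directly (a small sketch; the speed of ten million comparisons per second is the illustrative figure used above):

<syntaxhighlight lang="python">
import math

def sorting_cost(n: int, comparisons_per_second: float = 1e7) -> None:
    """Compare the comparison counts, and the resulting running times,
    of an O(n^2) elementary sort and an O(n log n) sort on n entries."""
    quadratic = n * n
    n_log_n = n * math.log2(n)
    print(f"n^2      : {quadratic:.2e} comparisons, "
          f"{quadratic / comparisons_per_second:.0f} s")
    print(f"n log2 n : {n_log_n:.2e} comparisons, "
          f"{n_log_n / comparisons_per_second:.1f} s")

sorting_cost(1_000_000)
# n^2 needs about 1e12 comparisons (about 1e5 seconds, more than a day);
# n log2 n needs about 2e7 comparisons (about 2 seconds).
</syntaxhighlight>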
Thus the evaluation of the complexity may allow eliminating many inefficient algorithms before any implementation. This may also be used for tuning complex algorithms without testing all variants. By determining the most costly steps of a complex algorithm, the study of complexity also allows focusing on these steps the effort for improving the efficiency of an implementation.
==See also==
* [[Computational complexity of mathematical operations]]
* [[Chinese Postman Problem Complexity List]]
* [[Master theorem (analysis of algorithms)]]
==References==
{{Reflist}}
*{{Citation
| last1=Arora | first1=Sanjeev
| last2=Barak | first2=Boaz
| title=Computational Complexity: A Modern Approach
| isbn=978-0-521-42426-4
| zbl=1193.68112
}}
*{{Citation
| last=Calude
| first=Cristian
| author-link=Cristian Calude
| title=Theories of Computational Complexity
| publisher=[[Elsevier]]
| year=1988
| isbn=9780444703569
| pages=487
}}
* {{citation
| title=Theory of Computational Complexity
| publisher=[[John Wiley & Sons]]
| issn= 0167-5060
| year=2000
| isbn=978-0-471-34506-0
| last=Goldreich
| first=Oded
| author-link = Oded Goldreich
| url = http://www.wisdom.weizmann.ac.il/~oded/cc-book.html
| title = Computational Complexity: A Conceptual Perspective
| last = Papadimitriou
| first = Christos
| author-link = Christos Papadimitriou
| title = Computational Complexity
| edition = 1st
|last=Sipser
|first=Michael
|author-link=Michael Sipser
|title=[[Introduction to the Theory of Computation]]
|edition=2nd
|isbn=0-534-95097-3
}}
{{Computer science}}
[[Category:Analysis of algorithms]]