Computational complexity: Difference between revisions

Evaluating the complexity of an algorithm is an important part of [[algorithm design]], as it gives useful information about the performance that can be expected.
 
It is a common misconception that the evaluation of the complexity of algorithms will become less important as a result of [[Moore's law]], which posits the [[exponential growth]] of the power of modern [[computer]]s. This is wrong because the increase in power allows working with larger input data ([[big data]]). For example, when one wants to sort alphabetically a list of a few hundred entries, such as the [[bibliography]] of a book, any algorithm should finish in well under a second. On the other hand, for a list of a million entries (the phone numbers of a large town, for example), the elementary algorithms that require <math>O(n^2)</math> comparisons would have to do a trillion comparisons, which would need around 30 hours at a speed of 10 million comparisons per second. By contrast, [[quicksort]] and [[merge sort]] require only <math>n\log_2 n</math> comparisons (as average-case complexity for the former, as worst-case complexity for the latter). For {{math|1=''n'' = 1,000,000}}, this gives approximately 20,000,000 comparisons, which would take only about 2 seconds at 10 million comparisons per second.
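The arithmetic behind this comparison can be reproduced in a few lines. This is an illustrative sketch, not part of the article's sources; the rate of 10 million comparisons per second is the illustrative figure assumed in the text above.

```python
import math

# Estimated comparison counts for sorting n = 1,000,000 entries,
# at an assumed speed of 10 million comparisons per second.
n = 1_000_000
rate = 10_000_000  # comparisons per second (illustrative assumption)

quadratic = n * n                # elementary O(n^2) sorts
linearithmic = n * math.log2(n)  # quicksort (average) / merge sort (worst case)

print(f"O(n^2):   {quadratic:.0f} comparisons, "
      f"about {quadratic / rate / 3600:.1f} hours")
print(f"n log2 n: {linearithmic:.0f} comparisons, "
      f"about {linearithmic / rate:.1f} seconds")
```

Running this confirms the gap described above: the quadratic sort needs on the order of a day, while the linearithmic sorts finish in a couple of seconds on the same hypothetical machine.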
 
Thus evaluating the complexity makes it possible to eliminate many inefficient algorithms before any implementation. It can also be used to tune complex algorithms without testing every variant: by identifying the most costly steps of an algorithm, the study of complexity shows where to focus the effort of improving the efficiency of an implementation.