=== Algorithm efficiency ===
The divide-and-conquer paradigm often helps in the discovery of efficient algorithms. It was the key, for example, to Karatsuba's fast multiplication method, the quicksort and mergesort algorithms, the [[Strassen algorithm]] for matrix multiplication, and fast Fourier transforms.
In all these examples, the D&C approach led to an improvement in the [[asymptotic complexity|asymptotic cost]] of the solution. For example, if (a) the [[Recursion (computer science)|base cases]] have constant-bounded size, the work of splitting the problem and combining the partial solutions is proportional to the problem's size <math>n</math>, and (b) there is a bounded number <math>p</math> of sub-problems of size ~ <math>n/p</math> at each stage, then the cost of the divide-and-conquer algorithm will be <math>O(n\log_{p}n)</math>.
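As an illustration of case (a)–(b), merge sort splits into <math>p = 2</math> sub-problems of size about <math>n/2</math> and does linear work to split and merge, giving the <math>O(n\log n)</math> cost described above. A minimal sketch (not any particular library's implementation):

```python
def merge_sort(a):
    """Divide and conquer with p = 2: split the list in half, sort each
    half recursively, and merge the sorted halves in linear time."""
    if len(a) <= 1:            # constant-size base case
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Combine step: merging two sorted lists costs O(n) per level,
    # and there are O(log n) levels of recursion.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```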
=== Parallelism ===
=== Memory access ===
Divide-and-conquer algorithms naturally tend to make efficient use of [[memory cache]]s. The reason is that once a sub-problem is small enough, it and all its sub-problems can, in principle, be solved within the cache, without accessing the slower main memory. An algorithm designed to exploit the cache in this way is called ''[[cache-oblivious algorithm|cache-oblivious]]'', because it does not contain the cache size as an explicit parameter.<ref name="cahob">{{cite journal | author = M. Frigo |author2=C. E. Leiserson |author3=H. Prokop | title = Cache-oblivious algorithms | journal = Proc. 40th Symp. On the Foundations of Computer Science |pages=285–297 | year = 1999|url=https://dspace.mit.edu/bitstream/handle/1721.1/80568/43558192-MIT.pdf;sequence=2|doi=10.1109/SFFCS.1999.814600 |isbn=0-7695-0409-4 |s2cid=62758836 }}</ref> Moreover, D&C algorithms for important problems (e.g., sorting, FFTs, and matrix multiplication) can be designed to be ''optimal'' cache-oblivious algorithms: they use the cache in a provably optimal way, in an asymptotic sense, regardless of the cache size. In contrast, the traditional approach to exploiting the cache is ''blocking'', as in [[loop nest optimization]], where the problem is explicitly divided into chunks of the appropriate size; this can also use the cache optimally, but only when the algorithm is tuned for the specific cache sizes of a particular machine.
The same advantage exists with regard to other hierarchical storage systems, such as [[Non-uniform memory access|NUMA]] or [[virtual memory]], as well as for multiple levels of cache: once a sub-problem is small enough, it can be solved within a given level of the hierarchy, without accessing the higher (slower) levels.
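The cache-oblivious idea can be sketched with a recursive matrix transpose: the code repeatedly halves the larger dimension, so at some recursion depth every sub-block fits in each level of the cache, yet no cache size appears as a parameter. This is an illustrative sketch, not the algorithm from the cited paper; the `cutoff` threshold is an assumed tuning constant that only limits recursion overhead, not a cache-size parameter.

```python
def transpose(a, b, r0, r1, c0, c1, cutoff=16):
    """Write the transpose of the sub-block a[r0:r1][c0:c1] into b,
    so that b[c][r] == a[r][c], by recursively splitting the longer
    dimension in half (cache-oblivious: no cache size appears here)."""
    if (r1 - r0) * (c1 - c0) <= cutoff:
        # Base case: the block is tiny, transpose it directly.
        for r in range(r0, r1):
            for c in range(c0, c1):
                b[c][r] = a[r][c]
    elif r1 - r0 >= c1 - c0:
        # Split the rows in half and recurse on each half.
        rm = (r0 + r1) // 2
        transpose(a, b, r0, rm, c0, c1, cutoff)
        transpose(a, b, rm, r1, c0, c1, cutoff)
    else:
        # Split the columns in half and recurse on each half.
        cm = (c0 + c1) // 2
        transpose(a, b, r0, r1, c0, cm, cutoff)
        transpose(a, b, r0, r1, cm, c1, cutoff)
```

Because the recursion subdivides until sub-blocks are small, the same code adapts to every level of a storage hierarchy without machine-specific tuning.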
=== Roundoff control ===