In [[computer science]], '''divide and conquer''' is an [[algorithm design paradigm]]. A divide-and-conquer [[algorithm]] [[Recursion (computer science)|recursively]] breaks down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem.
The divide-and-conquer technique is the basis of efficient algorithms for many problems, such as [[sorting algorithm|sorting]] (e.g., [[quicksort]], [[merge sort]]), [[multiplication algorithm|multiplying large numbers]] (e.g., the [[Karatsuba algorithm]]), finding the [[Closest pair of points problem|closest pair of points]], [[syntactic analysis]] (e.g., [[top-down parser]]s), and computing the [[discrete Fourier transform]] ([[fast Fourier transform|FFT]]).<ref>{{cite book |last1=Blahut |first1=Richard |author1-link=Richard Blahut |title=Fast Algorithms for Signal Processing |date=14 May 2014 |publisher=Cambridge University Press |isbn=978-0-511-77637-3 |pages=139–143}}</ref>
Designing efficient divide-and-conquer algorithms can be difficult. As in [[mathematical induction]], it is often necessary to generalize the problem to make it amenable to a [[Recursion (computer science)|recursive]] solution. The correctness of a divide-and-conquer algorithm is usually proved by mathematical induction, and its computational cost is often determined by solving [[recurrence relation]]s.
== Divide and conquer ==
[[File:Merge sort algorithm diagram.svg|thumb|Divide-and-conquer approach to sort the list (38, 27, 43, 3, 9, 82, 10) in increasing order. ''Upper half:'' splitting into sublists; ''mid:'' a one-element list is trivially sorted; ''lower half:'' composing sorted sublists.]]
The divide-and-conquer [[Programming paradigm|paradigm]] is often used to find an optimal solution of a problem. Its basic idea is to decompose a given problem into two or more similar, but simpler, subproblems, to solve them in turn, and to compose their solutions to solve the given problem. Problems of sufficient simplicity are solved directly.
For example, to sort a given list of ''n'' [[Natural number|natural numbers]], split it into two lists of about ''n''/2 numbers each, sort each of them in turn, and interleave both results appropriately to obtain the sorted version of the given list (see the picture). This approach is known as the [[merge sort]] algorithm.
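The following is a minimal sketch of this scheme in [[Python (programming language)|Python]] (an illustration rather than an optimized implementation):

<syntaxhighlight lang="python">
def merge_sort(items):
    """Sort a list by recursively sorting its halves and merging the results."""
    if len(items) <= 1:                 # base case: a short list is already sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])      # divide: solve each half independently
    right = merge_sort(items[mid:])
    return merge(left, right)           # combine: interleave the sorted halves

def merge(left, right):
    """Interleave two sorted lists into one sorted list."""
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    return result + left[i:] + right[j:]   # append whichever half remains
</syntaxhighlight>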
The name "divide and conquer" is sometimes applied to algorithms that reduce each problem to only one sub-problem, such as the [[binary search]] algorithm for finding a record in a sorted list (or its
An important application of divide and conquer is in optimization, where if the search space is reduced ("pruned") by a constant factor at each step, the overall algorithm has the same asymptotic complexity as the pruning step, with the constant depending on the pruning factor (by summing the [[geometric series]]); this is known as [[prune and search]].
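A standard instance is [[quickselect]], which finds the ''k''-th smallest element of a list by pruning, on average, a constant fraction of the candidates at each step, for an expected linear total running time. A sketch:

<syntaxhighlight lang="python">
import random

def quickselect(items, k):
    """Return the k-th smallest element of items (k = 0 gives the minimum)."""
    pivot = random.choice(items)
    smaller = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    larger = [x for x in items if x > pivot]
    if k < len(smaller):
        return quickselect(smaller, k)       # answer lies in the pruned lower part
    elif k < len(smaller) + len(equal):
        return pivot                         # the pivot itself is the answer
    else:                                    # answer lies in the pruned upper part
        return quickselect(larger, k - len(smaller) - len(equal))
</syntaxhighlight>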
== Early historical examples ==
Early examples of these algorithms are primarily decrease and conquer: the original problem is successively broken down into ''single'' subproblems, and so can in fact be solved iteratively.
[[Binary search algorithm|Binary search]], a decrease-and-conquer algorithm where the subproblems are of roughly half the original size, has a long history. While a clear description of the algorithm on computers appeared in 1946 in an article by [[John Mauchly]], the idea of using a sorted list of items to facilitate searching dates back at least as far as [[Babylonia]] in 200 BC.<ref name=Knuth3/> Another ancient decrease-and-conquer algorithm is the [[Euclidean algorithm]] to compute the [[greatest common divisor]] of two numbers by reducing the numbers to smaller and smaller equivalent subproblems, which dates to several centuries BC.
An early example of a divide-and-conquer algorithm with multiple subproblems is [[Carl Friedrich Gauss|Gauss]]'s 1805 description of what is now called the [[Cooley–Tukey FFT algorithm|Cooley–Tukey fast Fourier transform]] (FFT) algorithm,<ref name=Heideman84>Heideman, M. T., D. H. Johnson, and C. S. Burrus, "[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.331.4791&rep=rep1&type=pdf Gauss and the history of the fast Fourier transform]", IEEE ASSP Magazine, 1, (4), 14–21 (1984).</ref> although he did not [[analysis of algorithms|analyze its operation count]] quantitatively, and FFTs did not become widespread until they were rediscovered over a century later.
An early two-subproblem D&C algorithm that was specifically developed for computers and properly analyzed is the [[merge sort]] algorithm, invented by [[John von Neumann]] in 1945.<ref>{{ cite book | last=Knuth | first=Donald | author-link=Donald Knuth | year=1998 | title=The Art of Computer Programming: Volume 3 Sorting and Searching | url=https://archive.org/details/artofcomputerpro03knut | url-access=limited | page=[https://archive.org/details/artofcomputerpro03knut/page/159 159] | publisher=Addison-Wesley | isbn=0-201-89685-0 }}</ref>
Another notable example is the [[Karatsuba algorithm|algorithm]] invented by [[Anatolii Alexeevitch Karatsuba|Anatolii A. Karatsuba]] in 1960<ref>{{cite journal| last=Karatsuba | first=Anatolii A. | author-link=Anatolii Alexeevitch Karatsuba |author2=Yuri P. Ofman |author-link2=Yuri Petrovich Ofman | year=1962 | title=Умножение многозначных чисел на автоматах | journal=[[Doklady Akademii Nauk SSSR]] | volume=146 | pages=293–294}} Translated in {{cite journal| title=Multiplication of Multidigit Numbers on Automata | journal=Soviet Physics Doklady | volume=7 | year=1963 | pages=595–596 | bibcode=1963SPhD....7..595K |url={{Google books|MrkOAAAAIAAJ|plainurl=true}} | last1=Karatsuba | first1=A. | last2=Ofman | first2=Yu. }}</ref> that could multiply two ''n''-[[Units of information|digit]] numbers in <math>O(n^{\log_2 3})</math> operations (in [[Big O notation]]). This algorithm disproved [[Andrey Kolmogorov]]'s 1956 conjecture that <math>\Omega(n^2)</math> operations would be required for that task.
As another example of a divide-and-conquer algorithm that did not originally involve computers, [[Donald Knuth]] gives the method a [[post office]] typically uses to route mail: letters are sorted into separate bags for different geographical areas, each of these bags is itself sorted into batches for smaller sub-regions, and so on until they are delivered.<ref name=Knuth3>Donald E. Knuth, ''The Art of Computer Programming: Volume 3, Sorting and Searching'', second edition (Addison-Wesley, 1998).</ref> This is related to a [[radix sort]], described for [[IBM 80 series Card Sorters|punch-card sorting]] machines as early as 1929.<ref name=Knuth3/>
=== Algorithm efficiency ===
The divide-and-conquer paradigm often helps in the discovery of efficient algorithms. It was the key, for example, to [[Karatsuba algorithm|Karatsuba]]'s fast multiplication method, the quicksort and mergesort algorithms, the [[Strassen algorithm]] for [[matrix multiplication]], and fast Fourier transforms.
In all these examples, the D&C approach led to an improvement in the [[asymptotic complexity|asymptotic cost]] of the solution. For example, if (a) the [[Recursion (computer science)|base cases]] have constant-bounded size, the work of splitting the problem and combining the partial solutions is proportional to the problem's size <math>n</math>, and (b) there is a bounded number <math>p</math> of sub-problems of size ~ <math>\frac{n}{p}</math> at each stage, then the cost of the divide-and-conquer algorithm will be <math>O(n\log_{p}n)</math>.
For other types of divide-and-conquer approaches, running times can also be generalized. For example, suppose that (a) the work of splitting the problem and combining the partial solutions takes <math>cn</math> time, where <math>n</math> is the input size and <math>c</math> is some constant; (b) when <math>n < 2</math>, the algorithm takes time upper-bounded by <math>c</math>; and (c) there are <math>q</math> subproblems, each of size ~ <math>\frac{n}{2}</math>. Then the running times are as follows:
* if the number of subproblems <math>q > 2</math>, then the divide-and-conquer algorithm's running time is bounded by <math>O(n^{\log_{2}q})</math>.
* if the number of subproblems is exactly two, then the divide-and-conquer algorithm's running time is bounded by <math>O(n\log n)</math>.
* if the number of subproblems is exactly one, then the divide-and-conquer algorithm's running time is bounded by <math>O(n)</math>.<ref name="kleinberg&Tardos">{{cite book |last1=Kleinberg |first1=Jon |last2=Tardos |first2=Eva |title=Algorithm Design |date=March 16, 2005 |publisher=[[Pearson Education]] |isbn=9780321295354 |pages=214–220 |edition=1 |url=https://www.pearson.com/en-us/subject-catalog/p/algorithm-design/P200000003259/9780137546350 |access-date=26 January 2025}}</ref>
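As a sketch of where these bounds come from, unrolling the recurrence <math>T(n) = q\,T(n/2) + cn</math> over its <math>\log_2 n</math> levels of recursion, where level <math>k</math> holds <math>q^k</math> subproblems of size <math>n/2^k</math>, gives

<math display="block">T(n) \le \sum_{k=0}^{\log_2 n} q^k \cdot c\,\frac{n}{2^k} = cn \sum_{k=0}^{\log_2 n} \left(\frac{q}{2}\right)^k.</math>

This [[geometric series]] is dominated by its last term when <math>q > 2</math>, giving <math>O(n^{\log_2 q})</math>; contributes <math>cn</math> per level when <math>q = 2</math>, giving <math>O(n \log n)</math>; and is dominated by its first term when <math>q = 1</math>, giving <math>O(n)</math>.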
If, instead, the work of splitting the problem and combining the partial solutions takes <math>cn^2</math> time, and there are two subproblems, each of size <math>\frac{n}{2}</math>, then the running time of the divide-and-conquer algorithm is bounded by <math>O(n^2)</math>.<ref name="kleinberg&Tardos"/>
=== Parallelism ===
Divide-and-conquer algorithms are naturally adapted for execution in [[Multiprocessing|multi-processor]] machines, especially [[shared-memory]] systems where the communication of data between [[Central processing unit|processors]] does not need to be planned in advance because distinct sub-problems can be executed on different processors.
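For example, the chunks of a list can be sorted on different processors independently, with only the final combining step shared; a minimal sketch (the chunk count and pool size here are arbitrary choices):

<syntaxhighlight lang="python">
import heapq
from concurrent.futures import ProcessPoolExecutor

def parallel_merge_sort(data, workers=4):
    """Sort independent chunks on separate processes, then merge the results."""
    if not data:
        return []
    step = -(-len(data) // workers)      # ceiling division: size of each chunk
    chunks = [data[i:i + step] for i in range(0, len(data), step)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        sorted_chunks = list(pool.map(sorted, chunks))   # conquer in parallel
    return list(heapq.merge(*sorted_chunks))             # combine sequentially
</syntaxhighlight>

(On platforms that spawn rather than fork worker processes, the call must be made from under an <code>if __name__ == "__main__":</code> guard.)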
=== Memory access ===
Divide-and-conquer algorithms naturally tend to make efficient use of [[memory cache]]s. The reason is that once a sub-problem is small enough, it and all its sub-problems can, in principle, be solved within the [[Cache (computing)|cache]], without accessing the slower [[Computer data storage|main memory]]. An algorithm designed to exploit the cache in this way is called ''[[cache-oblivious algorithm|cache-oblivious]]'', because it does not contain the cache size as an explicit [[Parameter (computer programming)|parameter]].<ref name="cahob">{{cite conference |first1=M. |last1=Frigo |first2=C. E. |last2=Leiserson |first3=H. |last3=Prokop |first4=S. |last4=Ramachandran |title=Cache-oblivious algorithms |book-title=40th Annual Symposium on Foundations of Computer Science (FOCS 1999) |year=1999 |pages=285–297 |doi=10.1109/SFFCS.1999.814600}}</ref>
The same advantage exists with regards to other hierarchical storage systems, such as [[Non-uniform memory access|NUMA]] or [[virtual memory]], as well as for multiple levels of cache: once a sub-problem is small enough, it can be solved within a given level of the hierarchy, without accessing the higher (slower) levels.
=== Choosing the base cases ===
In any recursive algorithm, there is considerable freedom in the choice of the ''base cases'', the small subproblems that are solved directly in order to terminate the recursion.
Choosing the smallest or simplest possible base cases is more elegant and usually leads to simpler programs, because there are fewer cases to consider and they are easier to solve. For example, an FFT algorithm could stop the recursion when the input is a single sample, and the quicksort list-sorting algorithm could stop when the input is the empty list; in both examples, there is only one base case to consider, and it requires no processing.
On the other hand, efficiency often improves if the recursion is stopped at relatively large base cases, and these are solved non-recursively, resulting in a [[hybrid algorithm]]. This strategy avoids the overhead of recursive calls that do little or no work and may also allow the use of specialized non-recursive algorithms that, for those base cases, are more efficient than explicit recursion. A general procedure for a simple hybrid recursive algorithm is ''short-circuiting the base case'', also known as ''[[arm's-length recursion]]''. In this case, whether the next step will result in the base case is checked before the function call, avoiding an unnecessary function call. For example, in a tree, rather than recursing to a child node and then checking whether it is null, checking for null before recursing avoids half the function calls in some algorithms on binary trees (see the sketch below). Since a D&C algorithm eventually reduces each problem or sub-problem instance to a large number of base instances, these often dominate the overall cost of the algorithm, especially when the splitting/joining overhead is low. Note that these considerations do not depend on whether recursion is implemented by the compiler or by an explicit stack.
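As an illustration of arm's-length recursion, a sketch of summing the values in a binary tree, with the null test hoisted out of the recursive call (the caller is responsible for passing a non-null root):

<syntaxhighlight lang="python">
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def tree_sum(node):
    """Sum all values in the tree rooted at node (node must not be None)."""
    total = node.value
    if node.left is not None:        # test before the call, not inside it
        total += tree_sum(node.left)
    if node.right is not None:
        total += tree_sum(node.right)
    return total
</syntaxhighlight>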
Thus, for example, many library implementations of quicksort will switch to a simple loop-based [[insertion sort]] (or similar) algorithm once the number of items to be sorted is sufficiently small. Note that, if the empty list were the only base case, sorting a list with <math>n</math> entries would entail as many as <math>n</math> quicksort calls that would do nothing but return immediately. Increasing the base cases to lists of size 2 or less will eliminate most of those do-nothing calls, and more generally a base case larger than 2 is typically used to reduce the fraction of time spent in function-call overhead or stack manipulation.
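A sketch of such a hybrid (the cutoff value of 16 is an arbitrary placeholder; real libraries tune it empirically):

<syntaxhighlight lang="python">
CUTOFF = 16   # placeholder threshold; the best value is machine-dependent

def insertion_sort(a, lo, hi):
    """Sort a[lo..hi] in place with a simple loop-based insertion sort."""
    for i in range(lo + 1, hi + 1):
        key, j = a[i], i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def hybrid_quicksort(a, lo=0, hi=None):
    """Quicksort that hands sufficiently small sub-arrays to insertion sort."""
    if hi is None:
        hi = len(a) - 1
    if hi - lo + 1 <= CUTOFF:
        insertion_sort(a, lo, hi)            # non-recursive base case
        return
    pivot, i = a[hi], lo                     # Lomuto partition on the last element
    for j in range(lo, hi):
        if a[j] < pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    hybrid_quicksort(a, lo, i - 1)
    hybrid_quicksort(a, i + 1, hi)
</syntaxhighlight>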
Alternatively, one can employ large base cases that still use a divide-and-conquer algorithm, but implement the algorithm for predetermined set of fixed sizes where the algorithm can be completely [[loop unwinding|unrolled]] into code that has no recursion, loops, or [[Conditional (programming)|conditionals]] (related to the technique of [[partial evaluation]]). For example, this approach is used in some efficient FFT implementations, where the base cases are unrolled implementations of divide-and-conquer FFT algorithms for a set of fixed sizes.<ref name="fftw">{{cite journal | author = Frigo, M. |author2=Johnson, S. G. | url = http://www.fftw.org/fftw-paper-ieee.pdf | title = The design and implementation of FFTW3 | journal = Proceedings of the IEEE | volume = 93 | issue = 2 |date=February 2005 | pages = 216–231 | doi = 10.1109/JPROC.2004.840301|bibcode=2005IEEEP..93..216F |citeseerx=10.1.1.66.3097 |s2cid=6644892 }}</ref> [[Source-code generation]] methods may be used to produce the large number of separate base cases desirable to implement this strategy efficiently.<ref name="fftw"/>
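For illustration only, a fully unrolled 4-point FFT base case in Python; real codelets (e.g., those in FFTW) are machine-generated in C, but the shape is the same: straight-line code with no recursion, loops, or conditionals:

<syntaxhighlight lang="python">
def fft4(x):
    """Hardcoded 4-point FFT of the complex sequence x: pure straight-line code."""
    t0, t1 = x[0] + x[2], x[0] - x[2]   # butterflies on the even-index samples
    t2, t3 = x[1] + x[3], x[1] - x[3]   # butterflies on the odd-index samples
    return [t0 + t2, t1 - 1j * t3, t0 - t2, t1 + 1j * t3]
</syntaxhighlight>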
The generalized version of this idea is known as recursion "unrolling" or "coarsening", and various techniques have been proposed for automating the procedure of enlarging the base case.<ref>Radu Rugina and Martin Rinard, "[http://people.csail.mit.edu/rinard/paper/lcpc00.pdf Recursion unrolling for divide and conquer programs]" in ''Languages and Compilers for Parallel Computing'', chapter 3, pp. 34–48. ''Lecture Notes in Computer Science'' vol. 2017 (Berlin: Springer, 2001).</ref>
=== Sharing repeated subcomputations ===
For some problems, the branched recursion may end up evaluating the same sub-problem many times over. In such cases it may be worth identifying and saving the solutions to these overlapping subproblems, a technique which is commonly known as [[memoization]]. Followed to the limit, it leads to [[bottom-up design|bottom-up]] divide-and-conquer algorithms such as [[dynamic programming]].
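The recursive computation of [[Fibonacci number]]s is a standard small example: each call branches into two heavily overlapping subproblems, and caching their results turns an exponential running time into a linear one. A sketch:

<syntaxhighlight lang="python">
from functools import lru_cache

@lru_cache(maxsize=None)   # memoize: each distinct n is computed only once
def fib(n):
    """Return the n-th Fibonacci number."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)   # the two sub-problems overlap heavily
</syntaxhighlight>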
== See also ==
{{Commons category|Divide-and-conquer algorithms}}
* {{annotated link|Divide and conquer|"Divide and conquer"}}
* {{annotated link|MapReduce}}
== References ==
{{Reflist}}
{{Authority control}}