The '''Karatsuba''' [[multiplication algorithm]], a technique for quickly multiplying large numbers, was discovered by [[Anatolii Alexeevich Karatsuba]] in 1960 and published in a joint paper with Yu. Ofman in 1962.
The [[Toom–Cook multiplication|Toom-Cook algorithm]] is a faster generalization of Karatsuba's. For sufficiently large ''n'', Karatsuba's algorithm is beaten by the [[Schönhage-Strassen algorithm]].
The Karatsuba algorithm is a notable example of the [[divide and conquer algorithm|divide and conquer]] paradigm, specifically of [[binary splitting algorithm|binary splitting]].
==Algorithm==
The basic step of Karatsuba's algorithm is a formula that allows us to compute the product of two large numbers ''x'' and ''y'' using three multiplications of smaller numbers, each with about half as many digits as ''x'' or ''y'', plus some additions and digit shifts.
Let ''x'' and ''y'' be represented as ''n''-digit numbers in some base ''B''. For any positive integer ''m'' less than ''n'', one can split the two numbers as follows
:''x'' = ''x''<sub>1</sub>''B''<sup>''m''</sup> + ''x''<sub>0</sub>
:''y'' = ''y''<sub>1</sub>''B''<sup>''m''</sup> + ''y''<sub>0</sub>
where ''x''<sub>0</sub> and ''y''<sub>0</sub> are less than ''B''<sup>''m''</sup>. The product is then
:''xy'' = ''z''<sub>2</sub>''B''<sup>2''m''</sup> + ''z''<sub>1</sub>''B''<sup>''m''</sup> + ''z''<sub>0</sub>
where
:''z''<sub>2</sub> = ''x''<sub>1</sub>''y''<sub>1</sub>
:''z''<sub>1</sub> = ''x''<sub>1</sub>''y''<sub>0</sub> + ''x''<sub>0</sub>''y''<sub>1</sub>
:''z''<sub>0</sub> = ''x''<sub>0</sub>''y''<sub>0</sub>
Computing these three quantities directly requires four multiplications. Karatsuba observed that ''z''<sub>1</sub> can be obtained with only one additional multiplication, at the cost of a few extra additions:
:''z''<sub>1</sub> = (''x''<sub>1</sub> + ''x''<sub>0</sub>)(''y''<sub>1</sub> + ''y''<sub>0</sub>) - ''z''<sub>2</sub> - ''z''<sub>0</sub>
since
:''z''<sub>1</sub> = (''x''<sub>1</sub>''y''<sub>1</sub> + ''x''<sub>1</sub>''y''<sub>0</sub> + ''x''<sub>0</sub>''y''<sub>1</sub> + ''x''<sub>0</sub>''y''<sub>0</sub>) - ''x''<sub>1</sub>''y''<sub>1</sub> - ''x''<sub>0</sub>''y''<sub>0</sub> = ''x''<sub>1</sub>''y''<sub>0</sub> + ''x''<sub>0</sub>''y''<sub>1</sub>
To compute these three products of ''m''-digit numbers, we can apply Karatsuba multiplication [[recursion|recursively]], until the numbers become so small that their product can be computed directly. One typically chooses ''B'' so that this point is reached when the numbers have one digit each. In a computer with a full 32-bit by 32-bit [[multiplier circuit|multiplier]], for example, one could choose ''B'' = 2<sup>32</sup> or ''B'' = 10<sup>9</sup>, and store each digit as a separate 32-bit binary word.
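For illustration, one possible recursive implementation in Python, with ''B'' = 10 and the recursion stopping at single-digit operands, might look like the following sketch (the function name <code>karatsuba</code> and the variable names are merely illustrative, not part of the original formulation):
<syntaxhighlight lang="python">
def karatsuba(x, y):
    """Multiply two non-negative integers using Karatsuba's basic step,
    recursing until an operand has a single (base-10) digit."""
    if x < 10 or y < 10:
        return x * y                      # small enough to multiply directly
    n = max(len(str(x)), len(str(y)))     # digits of the longer operand
    m = (n + 1) // 2                      # split position: n/2, rounded up
    x1, x0 = divmod(x, 10 ** m)           # x = x1*B^m + x0, with B = 10
    y1, y0 = divmod(y, 10 ** m)           # y = y1*B^m + y0
    z2 = karatsuba(x1, y1)                # product of the high parts
    z0 = karatsuba(x0, y0)                # product of the low parts
    z1 = karatsuba(x1 + x0, y1 + y0) - z2 - z0   # cross terms, one multiplication
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0
</syntaxhighlight>
For example, <code>karatsuba(1234, 5678)</code> returns 7,006,652, in agreement with the worked example below.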
==Example==
To compute the product of 1234 and 5678, choose ''B'' = 10 and ''m'' = 2. Then:
:12 34 = 12 × 10<sup>2</sup> + 34
:56 78 = 56 × 10<sup>2</sup> + 78
:''z''<sub>2</sub> = 12 × 56 = 672
:''z''<sub>0</sub> = 34 × 78 = 2652
:''z''<sub>1</sub> = (12 + 34) × (56 + 78) - ''z''<sub>2</sub> - ''z''<sub>0</sub> = 46 × 134 - 672 - 2652 = 2840
:result = 672 × 10<sup>4</sup> + 2840 × 10<sup>2</sup> + 2652 = 7,006,652
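The same single step can also be written out as a short Python check (illustrative only; the variable names mirror the notation above):
<syntaxhighlight lang="python">
# One Karatsuba step for 1234 x 5678 with B = 10 and m = 2.
x1, x0 = 12, 34                            # 1234 = 12*10^2 + 34
y1, y0 = 56, 78                            # 5678 = 56*10^2 + 78
z2 = x1 * y1                               # 672
z0 = x0 * y0                               # 2652
z1 = (x1 + x0) * (y1 + y0) - z2 - z0       # 46*134 - 672 - 2652 = 2840
result = z2 * 10**4 + z1 * 10**2 + z0      # 7,006,652
assert result == 1234 * 5678
</syntaxhighlight>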
==Efficiency analysis==
Karatsuba's basic step works for any base ''B'' and any ''m'', but the recursive algorithm is most efficient when ''m'' is equal to ''n''/2, rounded up. In particular, if ''n'' is 2<sup>''k''</sup>, for some integer ''k'', and the recursion stops only when ''n'' is 1, then the recursive Karatsuba algorithm performs at most 3<sup>''k''</sup> multiplications of single-digit numbers; that is, ''n''<sup>''c''</sup> where ''c'' = log<sub>2</sub>3. When ''n'' is large, this is far fewer than the ''n''<sup>2</sup> single-digit multiplications required by the grade-school method. If ''n'' = 2<sup>10</sup> = 1024, for example, Karatsuba's algorithm uses 3<sup>10</sup> = 59,049 single-digit multiplications instead of (2<sup>10</sup>)<sup>2</sup> = 1,048,576.
Since one can extend any inputs with zero digits until their length is a power of two, it follows that the number of elementary multiplications, for any ''n'', is at most <math>3^{ \lceil\log_2 n \rceil} \leq 3 n^{\log_2 3}</math>.
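This count can be checked numerically, for instance with the following Python sketch (a simplification that assumes the sum of the two half-numbers still fits in ''m'' digits, i.e. it ignores a possible carry; the function name is illustrative):
<syntaxhighlight lang="python">
def single_digit_mults(n):
    """Single-digit multiplications performed by the recursive Karatsuba
    algorithm on two n-digit numbers, with m = n/2 rounded up and the
    carry in (x1 + x0) and (y1 + y0) ignored."""
    if n == 1:
        return 1
    m = (n + 1) // 2
    # one product of the high parts (n - m digits) and two products of
    # m-digit numbers (the low parts and the half-sums)
    return single_digit_mults(n - m) + 2 * single_digit_mults(m)

print(single_digit_mults(1024))   # 59049 = 3**10, versus 1024**2 = 1048576
</syntaxhighlight>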
Since the additions, subtractions, and digit shifts (multiplications by powers of ''B'') in Karatsuba's basic step take time proportional to ''n'', their total cost becomes negligible as ''n'' increases. More precisely, if ''t''(''n'') denotes the total number of elementary operations that the algorithm performs when multiplying two ''n''-digit numbers, then we can write
:<math>t(n) = 3 t(\lceil n/2\rceil) + cn + d</math>
for some constants ''c'' and ''d''. For this [[recurrence relation]], the [[master theorem]] gives the [[big O notation|asymptotic]] bound ''t''(''n'') = Θ(''n''<sup>log(3)/log(2)</sup>).
It follows that, for sufficiently large ''n'', Karatsuba's algorithm will perform ''fewer'' shifts and single-digit additions than longhand multiplication, even though its basic step uses ''more'' additions and shifts than the straightforward formula. For small values of ''n'', however, the extra shift and add operations may make it run slower than the longhand method. The point at which the gain becomes positive depends on the [[computer platform]] and context. As a rule of thumb, Karatsuba's method is usually faster when ''n'' is 10 or more.[http://gmplib.org/manual/Karatsuba-Multiplication.html][http://ozark.hendrix.edu/~burch/proj/karat/comment1.html]
==References==