Karatsuba algorithm

{{short description|Algorithm for integer multiplication}}
{{Infobox algorithm
|class = [[Multiplication algorithm]]
|image = <!-- filename only, no "File:" or "Image:" prefix, and no enclosing [[brackets]] -->
|caption =
|data =
|time = <!-- Worst time big-O notation -->
|best-time =
|average-time =
|space = <!-- Worst-case space complexity; auxiliary space
(excluding input) if not specified -->
}}
[[File:Karatsuba_multiplication.svg|thumb|300px|Karatsuba multiplication of az+b and cz+d (boxed), and 1234 and 567 with z=100. Magenta arrows denote multiplication, amber denotes addition, silver denotes subtraction and cyan denotes left shift. (A), (B) and (C) show recursion with z=10 to obtain intermediate values.]]
The '''Karatsuba algorithm''' is a fast [[multiplication algorithm]] for [[Integer|integers]]. It was discovered by [[Anatoly Karatsuba]] in 1960 and published in 1962.<ref name="kara1962">
{{cite journal
| author = A. Karatsuba and Yu. Ofman
| year = 1962
| pages = 293–294
| url = https://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=dan&paperid=26729&option_lang=eng
| postscript = . Translation in the academic journal ''[[Physics-Doklady]]'', '''7''' (1963), pp. 595–596}}
</ref><ref name="kara1995">
</ref><ref name="knuthV2">
Knuth D.E. (1969) ''[[The Art of Computer Programming]]. v.2.'' Addison-Wesley Publ.Co., 724 pp.
</ref> It is a [[divide-and-conquer algorithm]] that reduces the multiplication of two ''n''-digit numbers to three multiplications of ''n''/2-digit numbers and, by repeating this reduction, to at most <math>n^{\log_23}\approx n^{1.58}</math> single-digit multiplications in general (and exactly <math>n^{\log_23}</math> when ''n'' is a power of 2). It is therefore [[Asymptotic complexity|asymptotically faster]] than the [[long multiplication|traditional]] algorithm, which performs <math>n^2</math> single-digit products. For example, the Karatsuba algorithm requires 3<sup>10</sup> = 59,049 single-digit multiplications to multiply two 1024-digit numbers (''n'' = 1024 = 2<sup>10</sup>), whereas the traditional algorithm requires (2<sup>10</sup>)<sup>2</sup> = 1,048,576 (a speedup of 17.75 times).
 
The Karatsuba algorithm was the first multiplication algorithm asymptotically faster than the quadratic "grade school" algorithm.
==History==
The standard procedure for multiplication of two ''n''-digit numbers requires a number of elementary operations proportional to <math>n^2\,\!</math>, or <math>O(n^2)\,\!</math> in [[big-O notation]]. [[Andrey Kolmogorov]] conjectured that the traditional algorithm was ''[[asymptotically optimal]],'' meaning that any algorithm for that task would require <math>\Omega(n^2)\,\!</math> elementary operations.
 
In 1960, Kolmogorov organized a seminar on mathematical problems in [[cybernetics]] at [[Moscow State University]], where he stated the <math>\Omega(n^2)\,\!</math> conjecture and other problems in the [[Computational complexity theory|complexity of computation]]. Within a week, Karatsuba, then a 23-year-old student, found an algorithm (later called "divide and conquer") that multiplies two ''n''-digit numbers in <math>O(n^{\log_2 3})</math> elementary steps, thus disproving the conjecture. Kolmogorov was very excited about the discovery; he communicated it at the next meeting of the seminar, which was then terminated. Kolmogorov gave some lectures on the Karatsuba result at conferences all over the world (see, for example, "Proceedings of the International Congress of Mathematicians 1962", pp.&nbsp;351–356, and also "6 Lectures delivered at the International Congress of Mathematicians in Stockholm, 1962") and published the method in 1962, in the [[Proceedings of the USSR Academy of Sciences]]. The article had been written by Kolmogorov and contained two results on multiplication, Karatsuba's algorithm and a separate result by [[Yuri Petrovich Ofman|Yuri Ofman]]; it listed "A. Karatsuba and Yu. Ofman" as the authors. Karatsuba only became aware of the paper when he received the reprints from the publisher.<ref name="kara1995"/>
"6 Lectures delivered at the International Congress of Mathematicians in Stockholm, 1962") and published the method in 1962, in the [[Proceedings of the USSR Academy of Sciences]]. The article had been written by Kolmogorov and contained two results on multiplication, Karatsuba's algorithm and a separate result by [[Yuri Petrovich Ofman|Yuri Ofman]]; it listed "A. Karatsuba and Yu. Ofman" as the authors. Karatsuba only became aware of the paper when he received the reprints from the publisher.<ref name="kara1995"/>
 
==Algorithm==
 
===Basic step===
The basic principle of Karatsuba's algorithm is [[Divide-and-conquer algorithm|divide-and-conquer]], using a formula that allows one to compute the product of two large numbers <math>x</math> and <math>y</math> using three multiplications of smaller numbers, each with about half as many digits as <math>x</math> or <math>y</math>, plus some additions and digit shifts. This basic step is, in fact, a generalization of [[Multiplication algorithm#Complex number multiplication algorithm|a similar complex multiplication algorithm]], where the [[imaginary unit]] {{mvar|i}} is replaced by a power of the [[radix|base]].
 
Let <math>x</math> and <math>y</math> be represented as <math>n</math>-digit strings in some base <math>B</math>. For any positive integer <math>m</math> less than <math>n</math>, one can write the two given numbers as
:<math>x = x_1 B^m + x_0,</math>
:<math>y = y_1 B^m + y_0,</math>
where <math>x_0</math> and <math>y_0</math> are less than <math>B^m</math>. The product is then
:<math>
\begin{align}
xy &= (x_1 B^m + x_0)(y_1 B^m + y_0) \\
&= x_1 y_1 B^{2m} + (x_1 y_0 + x_0 y_1) B^m + x_0 y_0 \\
&= z_2 B^{2m} + z_1 B^m + z_0, \\
\end{align}
</math>
where
:<math>z_2 = x_1 y_1,</math>
:<math>z_1 = x_1 y_0 + x_0 y_1,</math>
:<math>z_0 = x_0 y_0.</math>
 
These formulae require four multiplications and were known to [[Charles Babbage]].<ref>Charles Babbage, Chapter VIII – Of the Analytical Engine, Larger Numbers Treated, [https://archive.org/details/bub_gb_Fa1JAAAAMAAJ/page/n142 <!-- pg=125 --> Passages from the Life of a Philosopher], Longman Green, London, 1864; page 125.</ref> Karatsuba observed that <math>xy</math> can be computed in only three multiplications, at the cost of a few extra additions. With <math>z_0</math> and <math>z_2</math> as before and <math>z_3=(x_1 + x_0) (y_1 + y_0),</math> one can observe that
:<math>
\begin{align}
z_1 &= x_1 y_0 + x_0 y_1 \\
&= (x_1 + x_0) (y_0 + y_1) - x_1 y_1 - x_0 y_0 \\
&= z_3 - z_2 - z_0. \\
\end{align}
</math>
 
Thus only three multiplications are required for computing <math>z_0, z_1</math> and <math>z_2.</math>
 
 
===Example===
To compute the product of 12345 and 6789, where ''B'' = 10, choose ''m'' = 3. We use ''m'' right shifts for decomposing the input operands using the resulting base (''B''<sup>''m''</sup> = ''1000''), as:
: 12345 = '''12''' · ''1000'' + '''345'''
: 6789 = '''6''' · ''1000'' + '''789'''
Only three multiplications, which operate on smaller integers, are used to compute three partial results:
: ''z''<sub>2</sub> = '''12''' × '''6''' = 72
: ''z''<sub>0</sub> = '''345''' × '''789''' = 272205
: ''z''<sub>1</sub> = ('''12''' + '''345''') '''×''' ('''6''' + '''789''') − ''z''<sub>2</sub> − ''z''<sub>0</sub> = 357 '''×''' 795 − 72 − 272205 = 283815 − 72 − 272205 = 11538
 
We get the result by just adding these three partial results, shifted accordingly (and then taking carries into account by decomposing these three inputs in base ''1000'' as for the input operands):
: result = ''z''<sub>2</sub> · (''B''<sup>''m''</sup>)<sup>''2''</sup> + ''z''<sub>1</sub> · (''B''<sup>''m''</sup>)<sup>''1''</sup> + ''z''<sub>0</sub> · (''B''<sup>''m''</sup>)<sup>''0''</sup>, i.e.
: result = 72 · ''1000''<sup>2</sup> + 11538 · ''1000'' + 272205 = '''83810205'''.
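The single step above can also be checked in code. The following C sketch (illustrative only; it is not part of the article's pseudocode, and the variable names are arbitrary) reproduces the same computation with ''B''<sup>''m''</sup> = 1000:
<syntaxhighlight lang="c">
#include <stdio.h>

/* Illustrative transcription of the worked example: one Karatsuba step
   for 12345 x 6789 with B = 10, m = 3, i.e. splitting at B^m = 1000. */
int main(void)
{
    long x = 12345, y = 6789, Bm = 1000;

    long x1 = x / Bm, x0 = x % Bm;   /* 12 and 345 */
    long y1 = y / Bm, y0 = y % Bm;   /* 6 and 789  */

    long z2 = x1 * y1;                           /* 72     */
    long z0 = x0 * y0;                           /* 272205 */
    long z1 = (x1 + x0) * (y1 + y0) - z2 - z0;   /* 11538  */

    printf("%ld\n", z2 * Bm * Bm + z1 * Bm + z0); /* prints 83810205 */
    return 0;
}
</syntaxhighlight>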
===Recursive application===
If ''n'' is four or more, the three multiplications in Karatsuba's basic step involve operands with fewer than ''n'' digits. Therefore, those products can be computed by [[recursion|recursive]] calls of the Karatsuba algorithm. The recursion can be applied until the numbers are so small that they can (or must) be computed directly.
 
In a computer with a full 32-bit by 32-bit [[Binary multiplier|multiplier]], for example, one could choose ''B'' = 2<sup>31</sup> = {{val|2,147,483,648}}, and store each digit as a separate 32-bit binary word. Then the sums ''x''<sub>1</sub> + ''x''<sub>0</sub> and ''y''<sub>1</sub> + ''y''<sub>0</sub> will not need an extra binary word for storing the carry-over digit (as in [[carry-save adder]]), and the Karatsuba recursion can be applied until the numbers to multiply are only one digit long.
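As a simplified illustration of the recursion, the following C sketch (not the article's pseudocode; the function name and the choice ''B'' = 2<sup>8</sup> are assumptions made for brevity) multiplies two 32-bit numbers by splitting at half the current bit width and recursing until each piece fits in a single 8-bit digit. An arbitrary-precision implementation would apply the same scheme to arrays of such digits.
<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdio.h>

/* Simplified sketch: recursive Karatsuba on machine words, splitting each
   operand at half of the current bit width and recursing until a piece fits
   in one 8-bit "digit".  A bignum version would use arrays of digits instead. */
static uint64_t karatsuba(uint64_t x, uint64_t y, unsigned bits)
{
    if (bits <= 8)                            /* single digit: multiply directly */
        return x * y;

    unsigned half = bits / 2;
    uint64_t mask = (1ULL << half) - 1;
    uint64_t x1 = x >> half, x0 = x & mask;   /* x = x1 * 2^half + x0 */
    uint64_t y1 = y >> half, y0 = y & mask;

    uint64_t z2 = karatsuba(x1, y1, half);
    uint64_t z0 = karatsuba(x0, y0, half);
    /* x1 + x0 and y1 + y0 may be one bit wider than 'half' bits,
       hence the extra bit in the recursive call. */
    uint64_t z1 = karatsuba(x1 + x0, y1 + y0, half + 1) - z2 - z0;

    return (z2 << (2 * half)) + (z1 << half) + z0;
}

int main(void)
{
    uint32_t a = 1234567891u, b = 987654321u;
    printf("%llu\n", (unsigned long long)karatsuba(a, b, 32));
    printf("%llu\n", (unsigned long long)((uint64_t)a * b));  /* same value */
    return 0;
}
</syntaxhighlight>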
 
===Asymmetric Karatsuba-like formulae===
Karatsuba's original formula and other generalizations are themselves symmetric. For example,
the following formula computes
:<math>c(x)=c_4x^4+c_3x^3+c_2x^2+c_1x+c_0=a(x)b(x)=(a_2x^2+a_1x+a_0)(b_2x^2+b_1x+b_0)</math>
with 6 multiplications in <math>{\text{GF}}(2)[x]</math>, where <math>{\text{GF}}(2)</math> is the Galois field with two elements 0 and 1.
 
:<math>\begin{align}
\left\{ {
\begin{array}{l}
c_0 = p_0, \\
c_1 = p_{012} + p_{02} + p_{12} + p_2, \\
c_2 = p_{012} + p_{01} + p_{12}, \\
c_3 = p_{012} + p_{01} + p_{02} + p_0, \\
c_4 = p_2,
\end{array}
}\right.
\end{align}</math>
where <math>p_{i}=a_ib_i, \ \ p_{ij}=(a_i+a_j)(b_i+b_j)</math> and <math>p_{ijk}=(a_i+a_j+a_k)(b_i+b_j+b_k)</math>.
We note that addition and subtraction are the same in fields of characteristic 2.
 
This formula is symmetric: it does not change if we exchange <math>a</math> and <math>b</math> in <math>p_i, \ \ p_{ij}</math> and <math>p_{ijk}</math>.
 
Based on the second [[Euclidean division|generalized division algorithm]],<ref>Haining Fan, Ming Gu, Jiaguang Sun, Kwok-Yan Lam, "Obtaining More Karatsuba-Like Formulae over the Binary Field", IET Information Security, Vol. 6, No. 1, pp. 14–19, 2012.</ref> Fan et al. found the following asymmetric formula:
 
:<math>\begin{align}
\left\{ {
\begin{array}{l}
c_{0}=p_{0} \\
c_{1}=p_{012}+p_{2}+m_{4}+m_{5} \\
c_{2}=p_{012}+m_{3}+m_{5} \\
c_{3}=p_{012}+p_{0}+m_{3}+m_{4} \\
c_{4}=p_{2},
\end{array}
}\right.
\end{align}</math>
where
<math>m_{3}=(a_{1}+a_{2})(b_{0}+b_{2}), \ \ m_{4}=(a_{0}+a_{1})(b_{1}+b_{2})</math> and
<math>m_{5}=(a_{0}+a_{2})(b_{0}+b_{1})</math>.
 
It is asymmetric because we can obtain the following new formula by exchanging <math>a</math> and <math>b</math> in
<math>m_{3}, \ \ m_{4}</math> and <math>m_{5}</math>.
 
:<math>\begin{align}
\left\{ {
\begin{array}{l}
c_{0}=p_{0} \\
c_{1}=p_{012}+p_{2}+m_{4}+m_{5} \\
c_{2}=p_{012}+m_{3}+m_{5} \\
c_{3}=p_{012}+p_{0}+m_{3}+m_{4} \\
c_{4}=p_{2},
\end{array}
}\right.
\end{align}</math>
where
<math>m_{3}=(a_{0}+a_{2})(b_{1}+b_{2}), \ \ m_{4}=(a_{1}+a_{2})(b_{0}+b_{1})</math> and <math>m_{5}=(a_{0}+a_{1})(b_{0}+b_{2})</math>.
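Both five-coefficient formulae above can be verified mechanically. The following C sketch (illustrative only, not taken from the cited paper) evaluates the symmetric formula and the first asymmetric formula for all 64 combinations of coefficient bits and compares them with schoolbook multiplication of the two polynomials over GF(2), where addition is XOR and coefficient multiplication is AND:
<syntaxhighlight lang="c">
#include <stdio.h>

/* Exhaustive check of the 6-multiplication formulae for
   (a2 x^2 + a1 x + a0)(b2 x^2 + b1 x + b0) over GF(2). */
int main(void)
{
    int errors = 0;
    for (int u = 0; u < 8; u++)
    for (int v = 0; v < 8; v++) {
        int a[3] = { u & 1, (u >> 1) & 1, (u >> 2) & 1 };
        int b[3] = { v & 1, (v >> 1) & 1, (v >> 2) & 1 };

        /* Schoolbook product, 9 multiplications. */
        int c[5] = { a[0] & b[0],
                     (a[0] & b[1]) ^ (a[1] & b[0]),
                     (a[0] & b[2]) ^ (a[1] & b[1]) ^ (a[2] & b[0]),
                     (a[1] & b[2]) ^ (a[2] & b[1]),
                     a[2] & b[2] };

        /* The six products used by the symmetric formula. */
        int p0   = a[0] & b[0];
        int p2   = a[2] & b[2];
        int p01  = (a[0] ^ a[1]) & (b[0] ^ b[1]);
        int p02  = (a[0] ^ a[2]) & (b[0] ^ b[2]);
        int p12  = (a[1] ^ a[2]) & (b[1] ^ b[2]);
        int p012 = (a[0] ^ a[1] ^ a[2]) & (b[0] ^ b[1] ^ b[2]);

        int s[5] = { p0,
                     p012 ^ p02 ^ p12 ^ p2,
                     p012 ^ p01 ^ p12,
                     p012 ^ p01 ^ p02 ^ p0,
                     p2 };

        /* The three products specific to the first asymmetric formula. */
        int m3 = (a[1] ^ a[2]) & (b[0] ^ b[2]);
        int m4 = (a[0] ^ a[1]) & (b[1] ^ b[2]);
        int m5 = (a[0] ^ a[2]) & (b[0] ^ b[1]);

        int t[5] = { p0,
                     p012 ^ p2 ^ m4 ^ m5,
                     p012 ^ m3 ^ m5,
                     p012 ^ p0 ^ m3 ^ m4,
                     p2 };

        for (int k = 0; k < 5; k++)
            if (s[k] != c[k] || t[k] != c[k])
                errors++;
    }
    printf("mismatches: %d\n", errors);   /* prints 0 */
    return 0;
}
</syntaxhighlight>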
 
==Efficiency analysis==
Karatsuba's basic step works for any base ''B'' and any ''m'', but the recursive algorithm is most efficient when ''m'' is equal to ''n''/2, rounded up. In particular, if ''n'' is 2<sup>''k''</sup>, for some integer ''k'', and the recursion stops only when ''n'' is 1, then the number of single-digit multiplications is 3<sup>''k''</sup>, which is ''n''<sup>''c''</sup> where ''c'' = log<sub>2</sub>3.
 
Since one can extend any inputs with zero digits until their length is a power of two, it follows that the number of elementary multiplications, for any ''n'', is at most <math>3^{ \lceil\log_2 n \rceil} \leq 3 n^{\log_2 3}\,\!</math>.
 
Since the additions, subtractions, and digit shifts (multiplications by powers of ''B'') in Karatsuba's basic step take time proportional to ''n'', their cost becomes negligible as ''n'' increases. More precisely, if ''T''(''n'') denotes the total number of elementary operations that the algorithm performs when multiplying two ''n''-digit numbers, then
 
:<math>T(n) = 3 T(\lceil n/2\rceil) + cn + d</math>
for some constants ''c'' and ''d''. For this [[recurrence relation]], the [[master theorem (analysis of algorithms)|master theorem for divide-and-conquer recurrences]] gives the [[big O notation|asymptotic]] bound <math>T(n) = \Theta(n^{\log_2 3})\,\!</math>.
 
It follows that, for sufficiently large ''n'', Karatsuba's algorithm will perform fewer shifts and single-digit additions than longhand multiplication, even though its basic step uses more additions and shifts than the straightforward formula. For small values of ''n'', however, the extra shift and add operations may make it run slower than the longhand method. The point of positive return depends on the [[computer platform]] and context. As a rule of thumb, Karatsuba's method is usually faster when the multiplicands are longer than 320–640 bits.<ref>{{Cite web|url=http://www.cburch.com/proj/karat/comment1.html|title=Karatsuba multiplication|website=www.cburch.com}}</ref>
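The operation counts behind this comparison are easy to tabulate. The following short C program (illustrative; the helper name karatsuba_count is an arbitrary choice, not part of the article) computes the <math>3^{\lceil\log_2 n\rceil}</math> single-digit multiplications used by the recursion alongside the <math>n^2</math> products of long multiplication for a few sizes:
<syntaxhighlight lang="c">
#include <stdio.h>

/* Counts single-digit multiplications: M(1) = 1, M(n) = 3 M(ceil(n/2)),
   i.e. 3^(ceil(log2 n)), versus the n^2 products of long multiplication. */
static unsigned long long karatsuba_count(unsigned long long n)
{
    return n <= 1 ? 1 : 3 * karatsuba_count((n + 1) / 2);
}

int main(void)
{
    for (unsigned long long n = 4; n <= 4096; n *= 4)
        printf("n = %4llu: Karatsuba %8llu, traditional %8llu\n",
               n, karatsuba_count(n), n * n);
    return 0;
}
</syntaxhighlight>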
 
==Implementation==
Here is the pseudocode for this algorithm, using numbers represented in base ten. For the binary representation of integers, it suffices to replace everywhere 10 by 2.<ref>{{cite book |last= Weiss |first= Mark A. |date= 2005 |title= Data Structures and Algorithm Analysis in C++ |publisher= Addison-Wesley |page= 480 |isbn= 0321375319}}</ref>

The second argument of the split_at function specifies the number of digits to extract from the ''right'': for example, split_at("12345", 3) will extract the 3 final digits, giving: high="12", low="345".

<syntaxhighlight lang="c">
function karatsuba(num1, num2)
if (num1 < 10 or num2 < 10)
return num1 × num2 /* fall back to traditional multiplication */
/* Calculates the size of the numbers. */
m = minmax(size_base10(num1), size_base10(num2))
m2 = floor(m / 2)
/* m2 = ceil (m / 2) will also work */
/* Split the digit sequences in the middle. */
Line 176 ⟶ 135:
high2, low2 = split_at(num2, m2)
/* 3 recursive calls made to numbers approximately half the size. */
z0 = karatsuba(low1, low2)
z1 = karatsuba((low1 + high1), (low2 + high2))
z2 = karatsuba(high1, high2)
return (z2 × 10 ^ (m2 × 2)) + ((z1 - z2 - z0) × 10 ^ m2) + z0
</syntaxhighlight>
 
An issue that occurs when implementing this recursion is that the above computation of <math>(x_1 + x_0)</math> and <math>(y_1 + y_0)</math> for <math>z_1</math> may result in overflow (producing a result in the range <math>B^m \leq \text{result} < 2 B^m</math>), which requires a multiplier having one extra bit. This can be avoided by noting that
 
:<math>z_1 = (x_0 - x_1)(y_1 - y_0) + z_2 + z_0.</math>
 
This computation of <math>(x_0 - x_1)</math> and <math>(y_1 - y_0)</math> will produce a result in the range <math>-B^m < \text{result} < B^m</math>. The differences may be negative, and encoding the sign would still require one extra bit for the multiplier. However, this can be avoided by recording the signs and then using the absolute values of <math>(x_0 - x_1)</math> and <math>(y_1 - y_0)</math> to perform an unsigned multiplication, after which the product is negated if the two signs differ. Another advantage is that even though <math>(x_0 - x_1)(y_1 - y_0)</math> may be negative, the final computation of <math>z_1</math> only involves additions.
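A minimal C sketch of this sign-handling trick for a single step (with ''B'' = 2<sup>16</sup> and ''m'' = 1 on machine words; the function name and constants are illustrative assumptions, not part of the pseudocode above): the absolute differences are multiplied as unsigned values, and the product is added or subtracted according to the recorded signs.
<syntaxhighlight lang="c">
#include <stdint.h>
#include <stdio.h>

/* One Karatsuba step for a 32-bit x 32-bit -> 64-bit product with B = 2^16.
   z1 is obtained from (x0 - x1)(y1 - y0) + z2 + z0, using recorded signs and
   absolute values so that only unsigned 16 x 16 products are needed. */
static uint64_t karatsuba_step(uint32_t x, uint32_t y)
{
    uint32_t x1 = x >> 16, x0 = x & 0xFFFF;    /* x = x1 * B + x0 */
    uint32_t y1 = y >> 16, y0 = y & 0xFFFF;

    uint64_t z2 = (uint64_t)x1 * y1;
    uint64_t z0 = (uint64_t)x0 * y0;

    /* |x0 - x1| and |y1 - y0| stay below B, so no extra multiplier bit is needed. */
    uint32_t dx = x0 > x1 ? x0 - x1 : x1 - x0;
    uint32_t dy = y1 > y0 ? y1 - y0 : y0 - y1;
    int negative = (x0 > x1) != (y1 > y0);     /* sign of (x0 - x1)(y1 - y0) */

    uint64_t d  = (uint64_t)dx * dy;
    uint64_t z1 = negative ? z2 + z0 - d : z2 + z0 + d;

    return (z2 << 32) + (z1 << 16) + z0;
}

int main(void)
{
    uint32_t x = 3000000000u, y = 2876543210u;
    printf("%llu\n", (unsigned long long)karatsuba_step(x, y));
    printf("%llu\n", (unsigned long long)((uint64_t)x * y));  /* same value */
    return 0;
}
</syntaxhighlight>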
 
==References==