A '''multiplication algorithm''' is an [[algorithm]] (or method) to [[multiplication|multiply]] two numbers. Depending on the size of the numbers, different algorithms are in use.
A major advantage of [[numeral system|positional numeral system]]s over other systems of writing down numbers is that they facilitate the usual grade-school method of '''long multiplication''': multiply the first number by each digit of the second number, then add up all the properly shifted results. To perform this algorithm, one needs to know the products of all possible digits, which is why [[multiplication table]]s have to be memorized. Humans use this algorithm in base 10, while computers employ the same algorithm in base 2. The algorithm is much simpler in base 2, since the multiplication table has only 4 entries. Rather than first computing all the products and then adding them together in a second phase, computers add each product to the result as soon as it is computed. Modern chips implement this algorithm for 32-[[bit]] or 64-[[bit]] numbers in [[hardware]] or in [[microcode]]. To multiply two numbers with ''n'' digits using this method, one needs about ''n''<sup>2</sup> operations. More formally: the time complexity of multiplying two ''n''-digit numbers using long multiplication is [[Big O notation|Ο]](''n''<sup>2</sup>).
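As a minimal sketch of the method just described (the function name and digit-list representation are illustrative choices, not standard API), long multiplication in base 10 can be written so that each digit product is added into the shifted position as soon as it is computed:

```python
def long_multiply(x, y):
    """Grade-school long multiplication of nonnegative integers in base 10.

    Digits are stored least-significant first, e.g. 123 -> [3, 2, 1].
    """
    xd = [int(c) for c in reversed(str(x))]
    yd = [int(c) for c in reversed(str(y))]
    result = [0] * (len(xd) + len(yd))
    for i, a in enumerate(xd):
        carry = 0
        for j, b in enumerate(yd):
            # add each digit product into the shifted position immediately,
            # rather than collecting partial products first
            total = result[i + j] + a * b + carry
            result[i + j] = total % 10
            carry = total // 10
        result[i + len(yd)] += carry
    return sum(d * 10 ** k for k, d in enumerate(result))
```

The two nested loops over the digits make the quadratic cost visible: for two ''n''-digit inputs the inner body runs ''n''<sup>2</sup> times.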
An old method for multiplication that does not require multiplication tables is the [[Peasant multiplication]] algorithm; it is in effect multiplication in base 2.
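A sketch of peasant multiplication (the function name is illustrative): repeatedly halve one factor and double the other, adding the doubled factor to a running total whenever the halved one is odd. No digit products are ever needed, only halving, doubling, and addition:

```python
def peasant_multiply(x, y):
    """Peasant (binary) multiplication: halve x, double y, and add y
    to the total whenever x is odd. Needs no multiplication table."""
    total = 0
    while x > 0:
        if x % 2 == 1:
            total += y
        x //= 2   # halve (drop the low binary digit of x)
        y *= 2    # double
    return total
```

The odd/even test reads off the binary digits of ''x'' one at a time, which is why this is multiplication in base 2.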
For systems that need to multiply huge numbers in the range of several thousand digits, such as [[computer algebra system]]s and [[bignum]] libraries, long multiplication is too slow.
These systems employ '''Karatsuba multiplication''', which was discovered in 1962 and proceeds as follows: suppose we work in base 10 (unlike most computer implementations, which work in base 2) and want to multiply two ''n''-digit numbers ''x'' and ''y'', where ''n'' is even and equal to 2''m'' (if ''n'' is odd instead, or the numbers are not of the same length, this can be corrected by adding zeros at the left end of ''x'' and/or ''y''). We can write
: ''x'' = ''x''<sub>1</sub> 10<sup>''m''</sup> + ''x''<sub>2</sub>
: ''y'' = ''y''<sub>1</sub> 10<sup>''m''</sup> + ''y''<sub>2</sub>
with ''m''-digit numbers ''x''<sub>1</sub>, ''x''<sub>2</sub>, ''y''<sub>1</sub> and ''y''<sub>2</sub>. The product is given by
: ''xy'' = ''x''<sub>1</sub>''y''<sub>1</sub> 10<sup>2''m''</sup> + (''x''<sub>1</sub>''y''<sub>2</sub> + ''x''<sub>2</sub>''y''<sub>1</sub>) 10<sup>''m''</sup> + ''x''<sub>2</sub>''y''<sub>2</sub>
so we need to quickly determine the numbers ''x''<sub>1</sub>''y''<sub>1</sub>, ''x''<sub>1</sub>''y''<sub>2</sub> + ''x''<sub>2</sub>''y''<sub>1</sub> and ''x''<sub>2</sub>''y''<sub>2</sub>. The heart of Karatsuba's method lies in the observation that this can be done with only three rather than four multiplications:
# compute ''x''<sub>1</sub>''y''<sub>1</sub>, call the result ''A''
# compute ''x''<sub>2</sub>''y''<sub>2</sub>, call the result ''B''
# compute (''x''<sub>1</sub> + ''x''<sub>2</sub>)(''y''<sub>1</sub> + ''y''<sub>2</sub>), call the result ''C''
# compute ''C'' - ''A'' - ''B''; this number is equal to ''x''<sub>1</sub>''y''<sub>2</sub> + ''x''<sub>2</sub>''y''<sub>1</sub>.
To compute these three products of ''m''-digit numbers, we can employ the same trick again, effectively using [[recursion]]. Once the three products are computed, the result is assembled with shifts and additions, which takes about ''n'' operations.
If ''T''(''n'') denotes the time it takes to multiply two ''n''-digit numbers with Karatsuba's method, then we can write
:''T''(''n'') = 3 ''T''(''n''/2) + ''cn'' + ''d''
for some constants ''c'' and ''d'', and this [[recurrence relation]] can be solved, giving a time complexity of Θ(''n''<sup>ln(3)/ln(2)</sup>). The number ln(3)/ln(2) is approximately 1.585, so this method is significantly faster than long multiplication. Because of the overhead of recursion, Karatsuba's multiplication is not very fast for small values of ''n''; typical implementations therefore switch to long multiplication if ''n'' is below some threshold.
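The steps above translate directly into code. The following sketch (function name illustrative) splits each number with `divmod` at the 10<sup>''m''</sup> boundary, performs the three recursive multiplications ''A'', ''B'', and ''C'', and falls back to ordinary multiplication below a threshold, as real implementations do:

```python
def karatsuba(x, y):
    """Karatsuba multiplication of nonnegative integers, split in base 10.

    Falls back to the built-in product for single-digit operands, standing
    in for the long-multiplication cutoff used in practice."""
    if x < 10 or y < 10:
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    x1, x2 = divmod(x, 10 ** m)          # x = x1*10^m + x2
    y1, y2 = divmod(y, 10 ** m)          # y = y1*10^m + y2
    a = karatsuba(x1, y1)                # A = x1*y1
    b = karatsuba(x2, y2)                # B = x2*y2
    c = karatsuba(x1 + x2, y1 + y2)      # C = (x1+x2)(y1+y2)
    # C - A - B equals x1*y2 + x2*y1, the middle coefficient
    return a * 10 ** (2 * m) + (c - a - b) * 10 ** m + b
```

Only three recursive multiplications occur per level; the splits, additions, and shifts account for the ''cn'' + ''d'' term in the recurrence.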
It is possible to experimentally verify whether a given system uses Karatsuba's method or long multiplication: take two 100,000-digit numbers, multiply them, and measure the time it takes. Then take two 200,000-digit numbers and measure the time it takes to multiply those. If Karatsuba's method is being used, the second time will be about three times as long as the first; if long multiplication is being used, it will be about four times as long.
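This experiment can be run directly against Python's built-in integers (the helper names are illustrative, and wall-clock ratios are naturally noisy):

```python
import random
import time

def random_n_digits(n):
    # a random n-digit integer (no leading zero)
    return random.randrange(10 ** (n - 1), 10 ** n)

def time_multiply(n, reps=5):
    """Average seconds to multiply two random n-digit integers."""
    x, y = random_n_digits(n), random_n_digits(n)
    start = time.perf_counter()
    for _ in range(reps):
        x * y
    return (time.perf_counter() - start) / reps

t1 = time_multiply(100_000)
t2 = time_multiply(200_000)
# a ratio near 3 suggests Karatsuba; near 4 suggests long multiplication
print(f"doubling the digit count scaled the time by {t2 / t1:.2f}x")
```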
Another method of multiplication is called Toom-Cook or [[Toom3]]. The Toom-Cook method splits each number to be multiplied into multiple parts. Karatsuba is a special case of Toom-Cook using two parts. A three-way Toom-Cook can do a size-3''N'' multiplication for the cost of five size-''N'' multiplications, an improvement by a factor of 9/5 compared to the Karatsuba method's improvement by a factor of 4/3. Using more parts eventually runs into the law of diminishing returns.
There exist even faster algorithms, based on the '''[[fast Fourier transform]]'''. The idea, due to [[Volker Strassen|Strassen]] (1968), is the following: multiplying two numbers represented as digit strings is virtually the same as computing the [[convolution]] of those two digit strings. Instead of computing a convolution directly, one can first compute the [[discrete Fourier transform]]s, multiply them entry by entry, and then compute the inverse Fourier transform of the result. (See [[convolution theorem]].) The fastest known method based on this idea was described in 1971 by [[Arnold Schönhage|Schönhage]] and [[Volker Strassen|Strassen]] ([[Schönhage-Strassen algorithm]]) and has a time complexity of Θ(''n'' ln(''n'') ln(ln(''n''))). The [[GIMPS]] distributed Internet [[prime number|prime]] search project deals with numbers having several million digits and employs a Fourier transform based multiplication algorithm. Using [[number-theoretic transform]]s instead of discrete Fourier transforms avoids [[rounding error]] problems, since they work in [[modular arithmetic]] rather than with [[complex number]]s.
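A toy illustration of the transform-multiply-invert idea (not the Schönhage-Strassen algorithm itself, which uses number-theoretic transforms and careful blocking): treat the digit strings as coefficient vectors, convolve them via a radix-2 FFT over the complex numbers, and round the results back to integers. Floating-point rounding limits how large this sketch can safely go.

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return list(a)
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = -1 if invert else 1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def fft_multiply(x, y):
    """Multiply nonnegative integers via FFT convolution of their digits."""
    a = [complex(int(c)) for c in reversed(str(x))]
    b = [complex(int(c)) for c in reversed(str(y))]
    n = 1
    while n < len(a) + len(b):   # pad so the cyclic convolution
        n *= 2                   # contains the full linear convolution
    a += [0j] * (n - len(a))
    b += [0j] * (n - len(b))
    # pointwise product of the transforms = transform of the convolution
    conv = fft([u * v for u, v in zip(fft(a), fft(b))], invert=True)
    digits = [round(v.real / n) for v in conv]   # undo the 1/n scaling
    return sum(d * 10 ** k for k, d in enumerate(digits))
```

Each transform costs O(''n'' log ''n'') operations versus the O(''n''<sup>2</sup>) of a direct convolution, which is the source of the speedup.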
All the above multiplication algorithms can also be used to multiply [[polynomial]]s.
A simple improvement to the basic recursive multiplication algorithm
: ''x''·0 = 0
: ''x''·''y'' = ''x'' + ''x''·(''y'' − 1)
where ''x'' is an arbitrary quantity and ''y'' is a natural number, is to use instead:
: ''x''·0 = 0
: ''x''·''y'' = 2''x''·(''y''/2), if ''y'' is divisible by 2
: ''x''·''y'' = ''x'' + 2''x''·(''y''/2), if ''y'' is not divisible by 2 (using integer division)
The major improvement in this algorithm arises because the number of operations required is O(log ''y'') rather than O(''y''). For numbers that can be represented directly as computer words, a further benefit is that multiplying by 2 is equivalent to an arithmetic shift left, while dividing by 2 is equivalent to an arithmetic shift right. The major benefits arise when ''y'' is very large, in which case it cannot be represented as a single computer word.
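The three equations above can be transcribed directly (function name illustrative; the shift operators assume ''x'' is an integer, while for an arbitrary quantity the doubling would be an addition):

```python
def shift_multiply(x, y):
    """Multiply integer x by natural number y in O(log y) steps,
    using shifts for the doubling and halving."""
    if y == 0:
        return 0                     # x * 0 = 0
    half = shift_multiply(x, y >> 1) # x * (y // 2), halving via right shift
    if y & 1:
        return x + (half << 1)       # x * y = x + 2*(x * (y // 2)), y odd
    return half << 1                 # x * y = 2*(x * (y // 2)), y even
```

The recursion depth is the number of binary digits of ''y'', which is where the O(log ''y'') bound comes from.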
This may not help so much for multiplication by [[real number|real]] or [[complex number|complex]] values, but is useful for multiplication of very large integers ("[[bignum]]s") which are supported in some programming languages such as [[Haskell programming language|Haskell]],
[[Java programming language|Java]], [[Ruby programming language|Ruby]], and
[[Common Lisp]].
See also: [[Strassen algorithm]].
==External links==
* [http://www.swox.com/gmp/manual/Multiplication-Algorithms.html#Multiplication%20Algorithms Multiplication Algorithms used by GMP]
[[Category:Arithmetic]][[Category:Algorithms]]