A '''multiplication algorithm''' is an [[algorithm]] (or method) to [[multiplication|multiply]] two numbers. Depending on the size of the numbers, different algorithms are in use.
The major advantage of [[number system|positional number system]]s over other systems of writing down numbers is that they facilitate the usual grade-school method of '''long multiplication''': multiply the first number by every digit of the second number and then add up all the properly shifted results. In order to perform this algorithm, one needs to know the products of all possible digits, which is why [[multiplication table]]s have to be memorized. Humans use this algorithm in base 10, while computers employ the same algorithm in base 2. The algorithm is much simpler in base 2, since the multiplication table has only 4 entries. Rather than first computing all the partial products and then adding them together in a second phase, computers add each partial product to the running result as soon as it is computed. Modern chips implement this algorithm for 32-[[bit]] or 64-[[bit]] numbers in [[hardware]] or in [[microcode]]. To multiply two numbers with ''n'' digits using this method, one needs about ''n''<sup>2</sup> operations. More formally: the time complexity of multiplying two ''n''-digit numbers using long multiplication is [[Big O|O]](''n''<sup>2</sup>).
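As an illustration, here is a minimal sketch of long multiplication in Python on numbers represented as lists of base-10 digits (the name <code>long_multiply</code> and the little-endian digit order are choices made here for exposition, not a standard interface). The nested loop makes the ''n''<sup>2</sup> digit products explicit:
<pre>
def long_multiply(x, y, base=10):
    """Multiply two numbers given as little-endian digit lists.

    Grade-school long multiplication: every digit of x is multiplied
    by every digit of y, and each partial product is added into the
    result at the correct (shifted) position as soon as it is computed.
    """
    result = [0] * (len(x) + len(y))
    for i, xd in enumerate(x):
        carry = 0
        for j, yd in enumerate(y):
            total = result[i + j] + xd * yd + carry
            result[i + j] = total % base
            carry = total // base
        result[i + len(y)] += carry
    return result

# 12 * 34 = 408; digits are little-endian, so 12 is [2, 1].
assert long_multiply([2, 1], [4, 3]) == [8, 0, 4, 0]
</pre>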
For systems that need to multiply huge numbers in the range of several thousand digits, such as [[computer algebra system]]s and [[bignum]] libraries, this algorithm is too slow.
A significantly faster method is '''Karatsuba multiplication'''. To multiply two numbers with 2''m'' digits each, split each into a high and a low half: ''x'' = ''x''<sub>1</sub> · 10<sup>''m''</sup> + ''x''<sub>2</sub> and ''y'' = ''y''<sub>1</sub> · 10<sup>''m''</sup> + ''y''<sub>2</sub> (here in base 10). Since ''xy'' = ''x''<sub>1</sub>''y''<sub>1</sub> · 10<sup>2''m''</sup> + (''x''<sub>1</sub>''y''<sub>2</sub> + ''x''<sub>2</sub>''y''<sub>1</sub>) · 10<sup>''m''</sup> + ''x''<sub>2</sub>''y''<sub>2</sub>, the product can be assembled from three smaller products instead of four:
# compute ''x''<sub>1</sub>''y''<sub>1</sub>, call the result ''A''
# compute ''x''<sub>2</sub>''y''<sub>2</sub>, call the result ''B''
# compute (''x''<sub>1</sub> + ''x''<sub>2</sub>)(''y''<sub>1</sub> + ''y''<sub>2</sub>), call the result ''C''
# compute ''C'' - ''A'' - ''B''; this number is equal to ''x''<sub>1</sub>''y''<sub>2</sub> + ''x''<sub>2</sub>''y''<sub>1</sub>.
To compute these three products of ''m''-digit numbers, we can employ the same trick again, effectively using [[recursion]].
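The following sketch shows one way the recursion might look in Python, operating directly on built-in integers; the 10-digit cutoff below which it falls back to plain multiplication is an arbitrary illustrative threshold:
<pre>
def karatsuba(x, y):
    """Multiply two non-negative integers with Karatsuba's method."""
    # Below some size, plain multiplication beats recursing; the
    # 10-digit threshold here is purely illustrative.
    if x < 10**10 or y < 10**10:
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    x1, x2 = divmod(x, 10**m)        # x = x1 * 10^m + x2
    y1, y2 = divmod(y, 10**m)        # y = y1 * 10^m + y2
    a = karatsuba(x1, y1)            # A = x1*y1
    b = karatsuba(x2, y2)            # B = x2*y2
    c = karatsuba(x1 + x2, y1 + y2)  # C = (x1+x2)(y1+y2)
    # C - A - B = x1*y2 + x2*y1, so three recursive products suffice.
    return a * 10**(2 * m) + (c - a - b) * 10**m + b

assert karatsuba(123456789012345, 987654321098765) == \
       123456789012345 * 987654321098765
</pre>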
If ''T''(''n'') denotes the time it takes to multiply two ''n''-digit numbers with Karatsuba's method, then we can write
:''T''(''n'') = 3 ''T''(''n''/2) + ''n'' + ''c''
for some constant ''c'', and this [[recurrence relation]] can be solved, giving a time complexity of O(''n''<sup>ln(3)/ln(2)</sup>). The number ln(3)/ln(2) is approximately 1.58, so this method is significantly faster than long multiplication. Because of the overhead of recursion, Karatsuba's multiplication is not very fast for small values of ''n''; typical implementations therefore switch to long multiplication if ''n'' is below some threshold.
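To see where this exponent comes from, the recurrence can be unrolled level by level: each level of the recursion triples the number of subproblems while halving their size (a sketch, absorbing the constant ''c'' into the O(''n'') term):
:<math>T(n) = \sum_{i=0}^{\log_2 n} 3^i \cdot O\!\left(\frac{n}{2^i}\right) = O\!\left(n \cdot \left(\tfrac{3}{2}\right)^{\log_2 n}\right) = O\!\left(3^{\log_2 n}\right) = O\!\left(n^{\log_2 3}\right)</math>
The last step uses <math>3^{\log_2 n} = n^{\log_2 3}</math>, and <math>\log_2 3 = \ln(3)/\ln(2)</math>.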
It is possible to verify experimentally whether a given system uses Karatsuba's method or long multiplication: take two 100,000-digit numbers of your choice, multiply them and measure the time this takes; then do the same with two 200,000-digit numbers. If Karatsuba's method is being used, the second multiplication will take about three times as long as the first (since 2<sup>1.58</sup> ≈ 3); if long multiplication is being used, it will take about four times as long.
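A sketch of this experiment in Python (a single run per size, so timings will be noisy in practice; <code>time_multiplication</code> is a name chosen here for illustration). CPython's built-in integers switch to Karatsuba above a cutoff, so a ratio near 3 is expected there:
<pre>
import random
import time

def time_multiplication(digits):
    """Time one multiplication of two random numbers of the given size."""
    x = random.randrange(10**(digits - 1), 10**digits)
    y = random.randrange(10**(digits - 1), 10**digits)
    start = time.perf_counter()
    _ = x * y
    return time.perf_counter() - start

t1 = time_multiplication(100_000)
t2 = time_multiplication(200_000)
# A ratio near 3 suggests Karatsuba; near 4 suggests long multiplication.
print(f"ratio: {t2 / t1:.2f}")
</pre>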
There exist even faster algorithms, but they are not used in computer algebra systems and bignum libraries because they are difficult to implement and don't provide speed benefits for the sizes of numbers typically encountered in those systems. These algorithms are based on the [[fast Fourier transform]]. The idea is the following: multiplying two numbers represented as digit strings is virtually the same as computing the [[convolution]] of those two digit strings. Instead of computing the convolution directly, one can first compute the [[discrete Fourier transform]]s of the two digit strings, multiply the transforms pointwise, and then apply the inverse transform; by the [[convolution theorem]], the result is the desired convolution. Since the fast Fourier transform performs each of these transforms in O(''n'' log ''n'') operations, this is much faster than direct convolution; a final pass through the result propagates the carries.
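The following sketch illustrates the idea with a floating-point FFT from NumPy; real bignum libraries use exact, number-theoretic variants instead, since round-off error limits how large the inputs of this naive version can be:
<pre>
import numpy as np

def fft_multiply(x_digits, y_digits, base=10):
    """Multiply two numbers (little-endian digit lists) via the FFT.

    The convolution of the digit strings is obtained by transforming
    both strings, multiplying pointwise, and transforming back; a
    final pass propagates the carries.
    """
    n = len(x_digits) + len(y_digits)   # room for the full product
    fx = np.fft.fft(x_digits, n)
    fy = np.fft.fft(y_digits, n)
    conv = np.fft.ifft(fx * fy).real.round().astype(int)
    # Carry propagation: convolution entries may exceed base - 1.
    result, carry = [], 0
    for coefficient in conv:
        carry, digit = divmod(coefficient + carry, base)
        result.append(int(digit))
    return result

# 12 * 34 = 408 with little-endian digits.
assert fft_multiply([2, 1], [4, 3]) == [8, 0, 4, 0]
</pre>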
All the above multiplication algorithms can also be used to multiply [[polynomial]]s.
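For example, multiplying polynomials given as coefficient lists is exactly the digit-string convolution described above, only without the carries; a minimal sketch in Python:
<pre>
def poly_multiply(p, q):
    """Multiply two polynomials given as coefficient lists
    (lowest degree first). This is long multiplication without
    carries: the product's coefficients are the convolution of
    the inputs.
    """
    result = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            result[i + j] += a * b
    return result

# (1 + 2x)(3 + x) = 3 + 7x + 2x^2
assert poly_multiply([1, 2], [3, 1]) == [3, 7, 2]
</pre>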