In base 2, long multiplication reduces to a nearly trivial operation. For each '1' bit in the [[wikt:multiplier|multiplier]], shift the [[wikt:multiplicand|multiplicand]] an appropriate amount and then sum the shifted values. Depending on computer processor architecture and choice of multiplier, it may be faster to code this algorithm using hardware bit shifts and adds rather than depend on multiplication instructions, when the multiplier is fixed and the number of adds required is small.
This [[algorithm]] is also known as peasant multiplication, because it has been widely used among those who are classified as peasants and thus have not memorized the [[multiplication table]]s required by long multiplication.<ref>{{Cite web|url=https://www.cut-the-knot.org/Curriculum/Algebra/PeasantMultiplication.shtml|title=Peasant Multiplication|author-link=Alexander Bogomolny|last=Bogomolny|first= Alexander |website=www.cut-the-knot.org|access-date=2017-11-04}}</ref> The algorithm was also in use in ancient Egypt.<ref>{{Cite book |author=D. Wells |year=1987 |page=44 |title=The Penguin Dictionary of Curious and Interesting Numbers |publisher=Penguin Books}}</ref>
On paper, write down in one column the numbers you get when you repeatedly halve the multiplier, ignoring the remainder; in a column beside it repeatedly double the multiplicand. Cross out each row in which the last digit of the first number is even, and add the remaining numbers in the second column to obtain the product.
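The halve-and-double procedure above maps directly onto bit operations; a minimal sketch (the function name is illustrative, not from the source) replaces the paper columns with shifts:

```python
def peasant_multiply(multiplier: int, multiplicand: int) -> int:
    """Peasant (binary) multiplication of two non-negative integers."""
    total = 0
    while multiplier > 0:
        if multiplier & 1:        # odd row: keep this doubling
            total += multiplicand
        multiplier >>= 1          # halve, discarding the remainder
        multiplicand <<= 1        # double
    return total
```

Each loop iteration corresponds to one row of the paper layout; the rows crossed out on paper (even first number) are exactly those skipped by the `multiplier & 1` test.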
This algorithm uses only three multiplications, rather than four, and five additions or subtractions rather than two. If a multiply is more expensive than three adds or subtracts, as when calculating by hand, then there is a gain in speed. On modern computers a multiply and an add can take about the same time, so there may be no speed gain. There is a trade-off in that there may be some loss of precision when using floating point.
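One common grouping that achieves the three-multiplication count (a sketch; the variable names are illustrative) computes ''k''<sub>1</sub> = ''c''(''a''+''b''), ''k''<sub>2</sub> = ''a''(''d''−''c''), and ''k''<sub>3</sub> = ''b''(''c''+''d''), from which the real part is ''k''<sub>1</sub> − ''k''<sub>3</sub> and the imaginary part is ''k''<sub>1</sub> + ''k''<sub>2</sub>:

```python
def complex_mul3(a, b, c, d):
    """(a + b*i) * (c + d*i) using three real multiplications
    and five additions/subtractions."""
    k1 = c * (a + b)
    k2 = a * (d - c)
    k3 = b * (c + d)
    return k1 - k3, k1 + k2   # (real part, imaginary part)
```

For example, `complex_mul3(1, 2, 3, 4)` returns `(-5, 10)`, matching (1 + 2''i'')(3 + 4''i'') = −5 + 10''i''.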
For [[fast Fourier transform]]s (FFTs) (or any [[Linear map|linear transformation]]) the complex multiplies are by constant coefficients ''c'' + ''di'' (called [[twiddle factor]]s in FFTs), in which case two of the additions (''d''−''c'' and ''c''+''d'') can be precomputed. Hence, only three multiplies and three adds are required.<ref>P. Duhamel and M. Vetterli, [http://math.berkeley.edu/~strain/273.F10/duhamel.vetterli.fft.review.pdf "Fast Fourier transforms: A tutorial review and a state of the art"] {{webarchive|url=https://web.archive.org/web/20140529212847/http://math.berkeley.edu/~strain/273.F10/duhamel.vetterli.fft.review.pdf |date=2014-05-29 }}, ''Signal Processing'' vol. 19, pp. 259–299 (1990), section 4.1.</ref> However, trading off a multiplication for an addition in this way may no longer be beneficial with modern [[floating-point unit]]s.<ref>S. G. Johnson and M. Frigo, "[http://fftw.org/newsplit.pdf A modified split-radix FFT with fewer arithmetic operations]," ''IEEE Trans. Signal Process.'', vol. 55, no. 1, pp. 111–119 (2007).</ref>
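When the coefficient ''c'' + ''di'' is fixed, the two sums can be formed once ahead of time; a sketch of that variant (function names are illustrative, not from any FFT library):

```python
def make_twiddle(c, d):
    """Precompute the constants for multiplying by a fixed c + d*i."""
    return c, d - c, c + d            # computed once per coefficient

def mul_by_twiddle(a, b, tw):
    """(a + b*i) * (c + d*i) with precomputed tw:
    three multiplies and three adds/subtracts per product."""
    c, d_minus_c, c_plus_d = tw
    k1 = c * (a + b)                  # 1 add, 1 multiply
    return k1 - b * c_plus_d, k1 + a * d_minus_c   # 2 multiplies, 2 add/subs
```

At runtime this performs exactly the three multiplies and three adds described above, since ''d''−''c'' and ''c''+''d'' were paid for during precomputation.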
===Karatsuba multiplication===
Karatsuba's algorithm is the first known algorithm for multiplication that is asymptotically faster than long multiplication,<ref>D. Knuth, ''The Art of Computer Programming'', vol. 2, sec. 4.3.3 (1998)</ref> and can thus be viewed as the starting point for the theory of fast multiplications.
In 1963, Peter Ungar suggested setting ''m'' to ''i'' to obtain a similar reduction in the complex multiplication algorithm.<ref name="taocp-vol2-sec464-ex41"/>
# compute ''b'' · ''d'', call the result ''F''
# compute ''a'' · ''c'', call the result ''G''
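The numbered steps above fit into the usual recursive scheme; a minimal sketch on Python integers (the base-case threshold and the binary split point ''m'' are implementation choices here, not from the source), keeping ''F'' = ''b'' · ''d'' and ''G'' = ''a'' · ''c'' as named in the steps:

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply non-negative integers using three recursive products."""
    if x < 16 or y < 16:                 # small inputs: multiply directly
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    a, b = x >> m, x & ((1 << m) - 1)    # x = a*2^m + b
    c, d = y >> m, y & ((1 << m) - 1)    # y = c*2^m + d
    F = karatsuba(b, d)                  # b*d
    G = karatsuba(a, c)                  # a*c
    H = karatsuba(a + b, c + d)          # (a+b)*(c+d)
    K = H - F - G                        # a*d + b*c, without a 4th product
    return (G << (2 * m)) + (K << m) + F
```

The identity behind the saving is (''a''+''b'')(''c''+''d'') − ''ac'' − ''bd'' = ''ad'' + ''bc'', so the middle term costs no extra multiplication, only subtractions.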