Multiplication algorithm: Difference between revisions

Line 2:
{{Lead rewrite|date=August 2022}}
{{Use dmy dates|date=May 2019|cs1-dates=y}}
A '''multiplication algorithm''' is an [[algorithm]] (or method) to [[multiplication|multiply]] two numbers. Depending on the size of the numbers, different algorithms are more efficient than others. Efficient multiplication algorithms have existed since the advent of the decimal system.
 
==Long multiplication==
If a [[numeral system|positional numeral system]] is used, a natural way of multiplying numbers is taught in schools
as '''long multiplication''', sometimes called '''grade-school multiplication''' or the '''Standard Algorithm''':
multiply the [[wikt:multiplicand|multiplicand]] by each digit of the [[wikt:multiplier|multiplier]] and then add up all the properly shifted results. It requires memorization of the [[multiplication table]] for single digits.
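
For illustration, the procedure can be written out in code. The following Python sketch (the function name <code>long_multiply</code> and the digit-array representation are chosen here for clarity and are not part of any standard) multiplies two decimal numerals digit by digit and adds up the properly shifted partial results:

<syntaxhighlight lang="python">
def long_multiply(a: str, b: str) -> str:
    """Grade-school long multiplication of two non-negative decimal numerals."""
    # Store digits least-significant first, so index i carries weight 10**i.
    x = [int(d) for d in reversed(a)]
    y = [int(d) for d in reversed(b)]
    result = [0] * (len(x) + len(y))       # the product has at most len(x)+len(y) digits
    for i, xi in enumerate(x):             # one shifted row per digit of the multiplier
        carry = 0
        for j, yj in enumerate(y):
            total = result[i + j] + xi * yj + carry
            result[i + j] = total % 10     # keep one digit in this column
            carry = total // 10            # carry the rest to the next column
        result[i + len(y)] += carry
    # Convert back to a string, dropping leading zeros.
    return ''.join(map(str, reversed(result))).lstrip('0') or '0'

print(long_multiply('23958233', '5830'))   # 139676498390
</syntaxhighlight>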
 
Line 227:
</math>
 
If one of {{math|''x''+''y''}} or {{math|''x''&minus;''y''}} is odd, the other is odd too; their squares are then both 1 mod 4, so taking the floor reduces each term by exactly one quarter, and the subtraction cancels these equal reductions. Discarding the remainders therefore gives the same result as the expression without the floor functions. Below is a lookup table of quarter squares with the remainder discarded for the digits 0 through 18; this allows for the multiplication of numbers up to {{math|9×9}}.
 
{| border="1" cellspacing="0" cellpadding="3" style="margin:0 0 0 0.5em; background:#fff; border-collapse:collapse; border-color:#7070090;" class="wikitable"
Line 244:
In 1980, Everett L. Johnson proposed using the quarter square method in a [[Digital data|digital]] multiplier.<ref name=eljohnson>{{Citation |last = Johnson |first = Everett L. |date = March 1980 |title = A Digital Quarter Square Multiplier |periodical = IEEE Transactions on Computers |___location = Washington, DC, USA |publisher = IEEE Computer Society |volume = C-29 |issue = 3 |pages = 258–261 |issn = 0018-9340 |doi =10.1109/TC.1980.1675558 |s2cid = 24813486 }}</ref> To form the product of two 8-bit integers, for example, the digital device forms the sum and difference, looks both quantities up in a table of squares, takes the difference of the results, and divides by four by shifting two bits to the right. For 8-bit integers the table of quarter squares will have 2<sup>9</sup>&minus;1=511 entries, either with one entry for the full range 0..510 of possible sums while the differences use only the first 256 entries (range 0..255), or with negative differences handled by two's-complement representation and 9-bit masking, which avoids testing the sign of differences; each entry is 16 bits wide (the entry values run from (0²/4)=0 to (510²/4)=65025).
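
As a rough software model of this scheme (an illustrative sketch only, not Johnson's hardware design: it uses a precomputed quarter-square table and looks up the absolute difference, corresponding to the first of the two table layouts above, rather than the two's-complement masking variant), the following Python code multiplies two 8-bit values with one addition, one subtraction and two table lookups:

<syntaxhighlight lang="python">
# Precomputed table of floor(n**2 / 4) for every possible sum of two 8-bit values (0..510).
QUARTER_SQUARES = [n * n // 4 for n in range(511)]

def qs_multiply(x: int, y: int) -> int:
    """Multiply two 8-bit unsigned integers using the quarter-square identity
    x*y = floor((x+y)**2 / 4) - floor((x-y)**2 / 4)."""
    assert 0 <= x <= 255 and 0 <= y <= 255
    s = x + y          # 0..510, may use the whole table
    d = abs(x - y)     # 0..255, uses only the first 256 entries
    return QUARTER_SQUARES[s] - QUARTER_SQUARES[d]

# Exhaustive check of the identity against ordinary multiplication
# for all 65,536 pairs of 8-bit operands.
assert all(qs_multiply(x, y) == x * y for x in range(256) for y in range(256))
</syntaxhighlight>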
 
The quarter square multiplier technique has also benefited 8-bit systems that do not have any support for a hardware multiplier. Charles Putney implemented this for the [[MOS Technology 6502|6502]].<ref name=cputney>{{Cite journal |last = Putney |first = Charles |title = Fastest 6502 Multiplication Yet |date = March 1986 |journal = Apple Assembly Line |volume = 6 |issue = 6 |url = http://www.txbobsc.com/aal/1986/aal8603.html#a5}}</ref>
 
==Computational complexity of multiplication==
Line 252:
A line of research in [[theoretical computer science]] is about the number of single-bit arithmetic operations necessary to multiply two <math>n</math>-bit integers. This is known as the [[computational complexity]] of multiplication. Usual algorithms done by hand have asymptotic complexity of <math>O(n^2)</math>, but in 1960 [[Anatoly Karatsuba]] discovered that better complexity was possible (with the [[Karatsuba algorithm]]).
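
To illustrate the idea behind Karatsuba's improvement (a minimal base-10 sketch chosen for readability, not an efficient implementation; the algorithm is discussed further in the section below), splitting each operand in half lets three half-size products replace the four needed by the schoolbook split, which is where the exponent <math>\log_2 3\approx 1.585</math> comes from:

<syntaxhighlight lang="python">
def karatsuba(x: int, y: int) -> int:
    """Recursive Karatsuba multiplication of non-negative integers.

    Three recursive half-size products replace the four of the schoolbook
    split, giving roughly O(n**1.585) digit operations instead of O(n**2)."""
    if x < 10 or y < 10:                     # base case: a single-digit operand
        return x * y
    m = max(len(str(x)), len(str(y))) // 2   # split point, in decimal digits
    high_x, low_x = divmod(x, 10 ** m)       # x = high_x * 10**m + low_x
    high_y, low_y = divmod(y, 10 ** m)
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    # (low_x + high_x)(low_y + high_y) - z0 - z2 = low_x*high_y + high_x*low_y
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0

assert karatsuba(12345, 6789) == 12345 * 6789
</syntaxhighlight>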
 
Currently, the algorithm with the best computational complexity is a 2019 algorithm of [[David Harvey (mathematician)|David Harvey]] and [[Joris van der Hoeven]], which builds on the [[number-theoretic transform]]s introduced with the [[Schönhage–Strassen algorithm]] to multiply integers using only <math>O(n\log n)</math> operations.<ref>{{cite journal | last1 = Harvey | first1 = David | last2 = van der Hoeven | first2 = Joris | author2-link = Joris van der Hoeven | doi = 10.4007/annals.2021.193.2.4 | issue = 2 | journal = [[Annals of Mathematics]] | mr = 4224716 | pages = 563–617 | series = Second Series | title = Integer multiplication in time <math>O(n \log n)</math> | volume = 193 | year = 2021 | s2cid = 109934776 | url = https://hal.archives-ouvertes.fr/hal-02070778v2/file/nlogn.pdf }}</ref> This is conjectured to be the best possible algorithm, but lower bounds of <math>\Omega(n\log n)</math> are not known.
 
===Karatsuba multiplication===