Multiplication algorithm

====Other notations====
In some countries such as [[Germany]], the above multiplication is depicted similarly but with the original product kept horizontal and computation starting with the first digit of the multiplier:<ref>{{Cite web |title=Multiplication |url=http://www.mathematische-basteleien.de/multiplication.htm |access-date=2022-03-15 |website=www.mathematische-basteleien.de}}</ref>
 
23958233 · 5830
===Grid method===
{{main|Grid method multiplication}}
The [[grid method multiplication|grid method]] (or box method) is an introductory method for multiple-digit multiplication that is often taught to pupils at [[primary school]] or [[elementary school]]. It has been a standard part of the national primary school mathematics curriculum in England and Wales since the late 1990s.<ref>{{cite news |first=Gary |last=Eason |url=http://news.bbc.co.uk/1/hi/education/639937.stm |title=Back to school for parents |publisher=[[BBC News]] |date=13 February 2000}}<br>{{cite news |first=Rob |last=Eastaway |author-link=Rob Eastaway |url=https://www.bbc.co.uk/news/magazine-11258175 |title=Why parents can't do maths today |publisher=BBC News |date=10 September 2010}}</ref>
 
Both factors are broken up ("partitioned") into their hundreds, tens and units parts, and the products of the parts are calculated explicitly in a relatively simple multiplication-only stage, before these contributions are totalled to give the final answer in a separate addition stage.
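
For illustration, a short sketch of the two stages in Python (the decimal partition and the function name are illustrative, not a prescribed implementation):

<syntaxhighlight lang="python">
def grid_multiply(x: int, y: int) -> int:
    """Multiply two whole numbers with the grid (box) method:
    split each factor into its decimal parts, form every pairwise product,
    then total the partial products in a separate addition stage."""
    def parts(n: int) -> list[int]:
        # 345 -> [5, 40, 300]  (units, tens, hundreds, ...)
        return [int(d) * 10**i for i, d in enumerate(reversed(str(n))) if d != "0"]

    partial_products = [a * b for a in parts(x) for b in parts(y)]  # multiplication stage
    return sum(partial_products)                                    # addition stage

# 34 × 13 = (30 + 4)(10 + 3) = 300 + 90 + 40 + 12 = 442
assert grid_multiply(34, 13) == 442
</syntaxhighlight>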
[[File:Hindu lattice 2.svg|thumb|right|Finally, sum along the diagonal tracts and carry as needed to get the answer]]
 
Lattice, or sieve, multiplication is algorithmically equivalent to long multiplication. It requires the preparation of a lattice (a grid drawn on paper) which guides the calculation and separates all the multiplications from the [[addition]]s. It was introduced to Europe in 1202 in [[Fibonacci]]'s [[Liber Abaci]]. Fibonacci described the operation as mental, using his right and left hands to carry the intermediate calculations. [[Matrakçı Nasuh]] presented six different variants of this method in his 16th-century book, Umdet-ul Hisab. It was widely used in [[Enderun]] schools across the Ottoman Empire.<ref>{{cite journal |last1=Corlu |first1=M.S. |last2=Burlbaw |first2=L.M. |last3=Capraro |first3=R.M. |last4=Corlu |first4=M.A. |last5=Han |first5=S. |title=The Ottoman Palace School Enderun and the Man with Multiple Talents, Matrakçı Nasuh |journal=Journal of the Korea Society of Mathematical Education Series D: Research in Mathematical Education |volume=14 |issue=1 |pages=19–31 |date=2010 |doi= |url=https://koreascience.kr/article/JAKO201017337333137.page}}</ref> [[Napier's bones]], or [[Napier's rods]], also used this method, as published by Napier in 1617, the year of his death.
 
As shown in the example, the multiplicand and multiplier are written above and to the right of a lattice, or sieve. The method is found in [[Muhammad ibn Musa al-Khwarizmi]]'s ''Arithmetic'', one of Leonardo's sources mentioned by Sigler, the author of ''Fibonacci's Liber Abaci'' (2002).{{citation needed|date=January 2016}}
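
As a rough sketch in Python (an algorithmic simulation of the drawn grid rather than the historical procedure; the diagonal sums with carries correspond to the final step shown in the figures):

<syntaxhighlight lang="python">
def lattice_multiply(multiplicand: int, multiplier: int) -> int:
    """Lattice (sieve) multiplication: fill a grid with one-digit products,
    then sum along the diagonals from the least significant end, carrying as needed."""
    a = [int(d) for d in str(multiplicand)]   # digits, most significant first
    b = [int(d) for d in str(multiplier)]
    # cell[i][j] holds the (at most two-digit) product of a[i] and b[j]
    cell = [[a[i] * b[j] for j in range(len(b))] for i in range(len(a))]

    digits, carry = [], 0
    # Diagonal d groups the cells whose products share the same place value.
    for d in range(len(a) + len(b) - 2, -1, -1):
        total = carry
        for i in range(len(a)):
            j = d - i
            if 0 <= j < len(b):
                total += cell[i][j]
        carry, digit = divmod(total, 10)
        digits.append(digit)
    while carry:
        carry, digit = divmod(carry, 10)
        digits.append(digit)
    return int("".join(str(d) for d in reversed(digits)))

assert lattice_multiply(58, 213) == 58 * 213
</syntaxhighlight>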
===Russian peasant multiplication===
{{Main|Peasant multiplication}}
The binary method is also known as peasant multiplication, because it has been widely used by people who are classified as peasants and thus have not memorized the [[multiplication table]]s required for long multiplication.<ref>{{Cite web|url=https://www.cut-the-knot.org/Curriculum/Algebra/PeasantMultiplication.shtml|title=Peasant Multiplication|author-link=Alexander Bogomolny|last=Bogomolny|first= Alexander |website=www.cut-the-knot.org|access-date=2017-11-04}}</ref>{{failed verification|date=March 2020}} The algorithm was in use in ancient Egypt.<ref>{{Cite book |first=D. |last=Wells | author-link=David G. Wells | year=1987 |page=44 |title=The Penguin Dictionary of Curious and Interesting Numbers |publisher=Penguin Books |isbn=978-0-14-008029-2}}</ref> Its main advantages are that it can be taught quickly, requires no memorization, and can be performed using tokens, such as [[poker chips]], if paper and pencil are not available. The disadvantage is that it takes more steps than long multiplication, so it can be unwieldy for large numbers.
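
A minimal sketch of the halve-and-double procedure in Python, assuming non-negative integer inputs (the description below gives the pencil-and-paper form):

<syntaxhighlight lang="python">
def peasant_multiply(x: int, y: int) -> int:
    """Russian peasant (binary) multiplication: repeatedly halve one factor and
    double the other, adding the doubled value whenever the halved one is odd."""
    total = 0
    while x > 0:
        if x % 2 == 1:   # halved column is odd: keep this row
            total += y
        x //= 2          # halve, discarding any remainder
        y *= 2           # double
    return total

assert peasant_multiply(238, 13) == 238 * 13
</syntaxhighlight>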
 
====Description====
====History of quarter square multiplication====
 
Quarter square multiplication, which involves the [[Floor and ceiling functions|floor function]], is attributed by some sources<ref>{{citation |title= Quarter Tables Revisited: Earlier Tables, Division of Labor in Table Construction, and Later Implementations in Analog Computers |last=McFarland |first=David |url=http://escholarship.org/uc/item/5n31064n |page=1 |year=2007}}</ref><ref>{{cite book| title=Mathematics in Ancient Iraq: A Social History |last=Robson |first=Eleanor |page=227 |year=2008 |isbn= 978-0691091822 }}</ref> to [[Babylonian mathematics]] (2000–1600 BC).
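
For reference, the identity underlying the method for integers <math>x</math> and <math>y</math> is given below; the floor functions may be discarded together because <math>x+y</math> and <math>x-y</math> always have the same parity, so the two squares leave the same remainder on division by 4:

:<math>\left\lfloor \frac{(x+y)^2}{4} \right\rfloor - \left\lfloor \frac{(x-y)^2}{4} \right\rfloor = \frac{(x+y)^2 - (x-y)^2}{4} = xy.</math>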
 
Antoine Voisin published a table of quarter squares from 1 to 1000 in 1817 as an aid in multiplication. A larger table of quarter squares from 1 to 100000 was published by Samuel Laundy in 1856,<ref>{{Citation |title=Reviews |journal=The Civil Engineer and Architect's Journal |year=1857 |pages=54–55 |url=https://books.google.com/books?id=gcNAAAAAcAAJ&pg=PA54 |postscript=.}}</ref> and a table from 1 to 200000 by Joseph Blater in 1888.<ref>{{Citation|title=Multiplying with quarter squares |first=Neville |last=Holmes| journal=The Mathematical Gazette |volume=87 |issue=509 |year=2003 |pages=296–299 |jstor=3621048|postscript=.|doi=10.1017/S0025557200172778 |s2cid=125040256 }}</ref>
 
Quarter square multipliers were used in [[analog computer]]s to form an [[analog signal]] that was the product of two analog input signals. In this application, the sum and difference of two input [[voltage]]s are formed using [[operational amplifier]]s. The square of each of these is approximated using [[piecewise linear function|piecewise linear]] circuits. Finally the difference of the two squares is formed and scaled by a factor of one fourth using yet another operational amplifier.
In 1980, Everett L. Johnson proposed using the quarter square method in a [[Digital data|digital]] multiplier.<ref name=eljohnson>{{Citation |last = Johnson |first = Everett L. |date = March 1980 |title = A Digital Quarter Square Multiplier |periodical = IEEE Transactions on Computers |___location = Washington, DC, USA |publisher = IEEE Computer Society |volume = C-29 |issue = 3 |pages = 258–261 |issn = 0018-9340 |doi =10.1109/TC.1980.1675558 |s2cid = 24813486 }}</ref> To form the product of two 8-bit integers, for example, the digital device forms the sum and difference, looks both quantities up in a table of squares, takes the difference of the results, and divides by four by shifting two bits to the right. For 8-bit integers the table of quarter squares will have 2<sup>9</sup>&minus;1=511 entries (one entry for the full range 0..510 of possible sums, the differences using only the first 256 entries in range 0..255), or 511 entries using two's complement and 9-bit masking for negative differences, which avoids testing the sign of differences; each entry is 16 bits wide (the entry values run from (0²/4)=0 to (510²/4)=65025).
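
A minimal sketch of this table-driven scheme in Python, assuming unsigned 8-bit inputs and using an <code>abs</code> call in place of the two's-complement-and-mask handling of negative differences (names are illustrative):

<syntaxhighlight lang="python">
# Table of floor(n^2 / 4) for n = 0..510 (511 entries, each fits in 16 bits).
QUARTER_SQUARES = [(n * n) >> 2 for n in range(511)]

def qs_multiply(x: int, y: int) -> int:
    """Multiply two unsigned 8-bit integers with two table lookups and a subtraction."""
    assert 0 <= x <= 255 and 0 <= y <= 255
    return QUARTER_SQUARES[x + y] - QUARTER_SQUARES[abs(x - y)]

assert qs_multiply(200, 123) == 200 * 123
</syntaxhighlight>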
 
The quarter square multiplier technique has benefited 8-bit systems that do not have any support for a hardware multiplier. Charles Putney implemented this for the [[MOS Technology 6502|6502]].<ref name=cputney>{{Cite journal |last = Putney |first = Charles |title = Fastest 6502 Multiplication Yet|date = March 1986 |journal = Apple Assembly Line |volume = 6 |issue = 6 |url = http://www.txbobsc.com/aal/1986/aal8603.html#a5}}</ref>
 
==Computational complexity of multiplication==
A line of research in [[theoretical computer science]] is about the number of single-bit arithmetic operations necessary to multiply two <math>n</math>-bit integers. This is known as the [[computational complexity]] of multiplication. The usual algorithms done by hand have asymptotic complexity of <math>O(n^2)</math>, but in 1960 [[Anatoly Karatsuba]] discovered that better complexity was possible (with the [[Karatsuba algorithm]]).
 
Currently, the algorithm with the best computational complexity is a 2019 algorithm of [[David Harvey (mathematician)|David Harvey]] and [[Joris van der Hoeven]], which builds on the [[number-theoretic transform]]s introduced with the [[Schönhage–Strassen algorithm]] to multiply integers using only <math>O(n\log n)</math> operations.<ref>{{cite journal | last1 = Harvey | first1 = David | last2 = van der Hoeven | first2 = Joris | author2-link = Joris van der Hoeven | doi = 10.4007/annals.2021.193.2.4 | issue = 2 | journal = [[Annals of Mathematics]] | mr = 4224716 | pages = 563–617 | series = Second Series | title = Integer multiplication in time <math>O(n \log n)</math> | volume = 193 | year = 2021| s2cid = 109934776 | url = https://hal.archives-ouvertes.fr/hal-02070778v2/file/nlogn.pdf }}</ref> This is conjectured to be the best possible algorithm, but no lower bound of <math>\Omega(n\log n)</math> is known.
 
===Karatsuba multiplication===
:<math>z_0 = x_0 y_0.</math>
 
These formulae require four multiplications and were known to [[Charles Babbage]].<ref>Charles Babbage, Chapter VIII – Of the Analytical Engine, Larger Numbers Treated, [https://archive.org/details/bub_gb_Fa1JAAAAMAAJ/page/n142 <!-- pg=125 --> Passages from the Life of a Philosopher], Longman Green, London, 1864; page 125.</ref> Karatsuba observed that <math>xy</math> can be computed in only three multiplications, at the cost of a few extra additions. With <math>z_0</math> and <math>z_2</math> as before one can observe that
 
:<math>
==== History ====
 
The algorithm was invented by [[Volker Strassen|Strassen]] (1968). It was made practical and theoretical guarantees were provided in 1971 by [[Arnold Schönhage|Schönhage]] and Strassen, resulting in the [[Schönhage–Strassen algorithm]].<ref name="schönhage">{{cite journal |first1=A. |last1=Schönhage |first2=V. |last2=Strassen |title=Schnelle Multiplikation großer Zahlen |trans-title=Fast multiplication of large numbers |journal=Computing |volume=7 |issue= 3–4|pages=281–292 |date=1971 |doi=10.1007/BF02242355 |s2cid=9738629 |url=https://link.springer.com/article/10.1007/BF02242355}}</ref>
 
=== Further improvements ===
 
In 2007 the [[asymptotic complexity]] of integer multiplication was improved by the Swiss mathematician [[Martin Fürer]] of Pennsylvania State University to ''n''&nbsp;log(''n'')&nbsp;2<sup>Θ([[iterated logarithm|log<sup>*</sup>]](''n''))</sup> using Fourier transforms over [[complex number]]s,<ref name="fürer_1">{{cite book |first=M. |last=Fürer |chapter=Faster Integer Multiplication |chapter-url=https://ivv5hpp.uni-muenster.de/u/cl/WS2007-8/mult.pdf |doi=10.1145/1250790.1250800 |title=Proceedings of the thirty-ninth annual ACM symposium on Theory of computing, June 11–13, 2007, San Diego, California, USA |publisher= |___location= |date=2007 |isbn=978-1-59593-631-8 |pages=57–66 |s2cid=8437794 |url=}}</ref> where log<sup>*</sup> denotes the [[iterated logarithm]]. Anindya De, Chandan Saha, Piyush Kurur and Ramprasad Saptharishi gave a similar algorithm using [[modular arithmetic]] in 2008, achieving the same running time.<ref>{{cite book |first1=A. |last1=De |first2=C. |last2=Saha |first3=P. |last3=Kurur |first4=R. |last4=Saptharishi |chapter=Fast integer multiplication using modular arithmetic |chapter-url= |doi=10.1145/1374376.1374447 |title=Proceedings of the 40th annual ACM Symposium on Theory of Computing (STOC) |publisher= |___location= |date=2008 |isbn=978-1-60558-047-0 |pages=499–506 |url= |arxiv=0801.1416|s2cid=3264828 }}</ref> In the context of the above material, what these latter authors achieved is to find ''N'' much less than 2<sup>3''k''</sup> + 1, so that ''Z''/''NZ'' has a (2''m'')th root of unity. This speeds up computation and reduces the time complexity. However, these latter algorithms are only faster than Schönhage–Strassen for impractically large inputs.
 
In 2014, Harvey, [[Joris van der Hoeven]] and Lecerf<ref>{{cite journal
| year = 2016}}</ref> gave a new algorithm that achieves a running time of <math>O(n\log n \cdot 2^{3\log^* n})</math>, making explicit the implied constant in the <math>O(\log^* n)</math> exponent. They also proposed a variant of their algorithm which achieves <math>O(n\log n \cdot 2^{2\log^* n})</math> but whose validity relies on standard conjectures about the distribution of [[Mersenne prime]]s. In 2016, Covanov and Thomé proposed an integer multiplication algorithm based on a generalization of [[Fermat primes]] that conjecturally achieves a complexity bound of <math>O(n\log n \cdot 2^{2\log^* n})</math>. This matches the earlier conditional result of Harvey, van der Hoeven, and Lecerf but uses a different algorithm and relies on a different conjecture.<ref>{{cite journal |first1=Svyatoslav |last1=Covanov |first2=Emmanuel |last2=Thomé |title=Fast Integer Multiplication Using Generalized Fermat Primes |journal=[[Mathematics of Computation|Math. Comp.]] |volume=88 |year=2019 |issue=317 |pages=1449–1477 |doi=10.1090/mcom/3367 |arxiv=1502.02800 |s2cid=67790860 }}</ref> In 2018, Harvey and van der Hoeven used an approach based on the existence of short lattice vectors guaranteed by [[Minkowski's theorem]] to prove an unconditional complexity bound of <math>O(n\log n \cdot 2^{2\log^* n})</math>.<ref>{{cite journal |first1=D. |last1=Harvey |first2=J. |last2=van der Hoeven |year=2019 |title=Faster integer multiplication using short lattice vectors |journal=The Open Book Series |volume=2 |pages=293–310 |doi=10.2140/obs.2019.2.293 |arxiv=1802.07932|s2cid=3464567 }}</ref>
 
In March 2019, [[David Harvey (mathematician)|David Harvey]] and [[Joris van der Hoeven]] announced their discovery of an {{nowrap|''O''(''n'' log ''n'')}} multiplication algorithm.<ref>{{Cite magazine|url=https://www.quantamagazine.org/mathematicians-discover-the-perfect-way-to-multiply-20190411/|title=Mathematicians Discover the Perfect Way to Multiply|last=Hartnett|first=Kevin|magazine=Quanta Magazine|date=11 April 2019|access-date=2019-05-03}}</ref> It was published in the ''[[Annals of Mathematics]]'' in 2021.<ref>{{cite journal | last1 = Harvey | first1 = David | last2 = van der Hoeven | first2 = Joris | author2-link = Joris van der Hoeven | doi = 10.4007/annals.2021.193.2.4 | issue = 2 | journal = [[Annals of Mathematics]] | mr = 4224716 | pages = 563–617 | series = Second Series | title = Integer multiplication in time <math>O(n \log n)</math> | volume = 193 | year = 2021| s2cid = 109934776 | url = https://hal.archives-ouvertes.fr/hal-02070778v2/file/nlogn.pdf }}</ref> Because Schönhage and Strassen predicted that ''n''&nbsp;log(''n'') is the "best possible" result, Harvey said: "...our work is expected to be the end of the road for this problem, although we don't know yet how to prove this rigorously."<ref>{{cite news |last1=Gilbert |first1=Lachlan |title=Maths whiz solves 48-year-old multiplication problem |url=https://newsroom.unsw.edu.au/news/science-tech/maths-whiz-solves-48-year-old-multiplication-problem |access-date=18 April 2019 |publisher=UNSW |date=4 April 2019}}</ref>
 
===Lower bounds===
There is a trivial lower bound of [[Big O notation#Family of Bachmann–Landau notations|Ω]](''n'') for multiplying two ''n''-bit numbers on a single processor; no matching algorithm (on conventional machines, that is, on Turing-equivalent machines) nor any sharper lower bound is known. Multiplication lies outside of [[ACC0|AC<sup>0</sup>[''p'']]] for any prime ''p'', meaning there is no family of constant-depth, polynomial (or even subexponential) size circuits using AND, OR, NOT, and MOD<sub>''p''</sub> gates that can compute a product. This follows from a constant-depth reduction of MOD<sub>''q''</sub> to multiplication.<ref>{{cite book |first1=Sanjeev |last1=Arora |first2=Boaz |last2=Barak |title=Computational Complexity: A Modern Approach |publisher=Cambridge University Press |date=2009 |isbn=978-0-521-42426-4 |url={{GBurl|8Wjqvsoo48MC|pg=PR7}}}}</ref> Lower bounds for multiplication are also known for some classes of [[branching program]]s.<ref>{{cite journal |first1=F. |last1=Ablayev |first2=M. |last2=Karpinski |title=A lower bound for integer multiplication on randomized ordered read-once branching programs |journal=Information and Computation |volume=186 |issue=1 |pages=78–89 |date=2003 |doi=10.1016/S0890-5401(03)00118-4 |url=https://core.ac.uk/download/pdf/82445954.pdf}}</ref>
 
==Complex number multiplication==
This algorithm uses only three multiplications, rather than four, and five additions or subtractions rather than two. If a multiply is more expensive than three adds or subtracts, as when calculating by hand, then there is a gain in speed. On modern computers a multiply and an add can take about the same time, so there may be no speed gain. There is a trade-off in that there may be some loss of precision when using floating point.
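
One way to arrange the three products, sketched in Python (the grouping is the familiar one attributed to Gauss; variable names are illustrative, and other equivalent groupings exist):

<syntaxhighlight lang="python">
def complex_multiply(a, b, c, d):
    """Return the real and imaginary parts of (a + bi)(c + di)
    using three real multiplications and five additions/subtractions."""
    k1 = c * (a + b)            # first multiplication
    k2 = a * (d - c)            # second
    k3 = b * (c + d)            # third
    return k1 - k3, k1 + k2     # (ac - bd, ad + bc)

# (3 + 2i)(1 + 7i) = -11 + 23i
assert complex_multiply(3, 2, 1, 7) == (-11, 23)
</syntaxhighlight>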
 
For [[fast Fourier transform]]s (FFTs) (or any [[Linear map|linear transformation]]) the complex multiplies are by constant coefficients ''c''&nbsp;+&nbsp;''di'' (called [[twiddle factor]]s in FFTs), in which case two of the additions (''d''−''c'' and ''c''+''d'') can be precomputed. Hence, only three multiplies and three adds are required.<ref>{{cite journal |first1=P. |last1=Duhamel |first2=M. |last2=Vetterli |title=Fast Fourier transforms: A tutorial review and a state of the art |journal=Signal Processing |volume=19 |issue=4 |pages=259–299 See Section 4.1 |date=1990 |doi=10.1016/0165-1684(90)90158-U |url=https://core.ac.uk/download/pdf/147907050.pdf}}</ref> However, trading off a multiplication for an addition in this way may no longer be beneficial with modern [[floating-point unit]]s.<ref>{{cite journal |first1=S.G. |last1=Johnson |first2=M. |last2=Frigo |title=A modified split-radix FFT with fewer arithmetic operations |journal=IEEE Trans. Signal Process. |volume=55 |issue= 1|pages=111–9 See Section IV |date=2007 |doi=10.1109/TSP.2006.882087 |bibcode=2007ITSP...55..111J |s2cid=14772428 |url=https://www.fftw.org/newsplit.pdf }}</ref>
 
==Polynomial multiplication==
All the above multiplication algorithms can also be expanded to multiply [[polynomial]]s. Alternatively, the [[Kronecker substitution]] technique may be used to convert the problem of multiplying polynomials into a single binary multiplication.<ref>{{citation |first1 = Joachim |last1 = von zur Gathen | author1-link = Joachim von zur Gathen |first2 = Jürgen | last2 = Gerhard |title = Modern Computer Algebra |publisher = Cambridge University Press |year = 1999 |isbn = 978-0-521-64176-0 |pages = 243–244 |url = https://books.google.com/books?id=AE5PN5QGgvUC&pg=PA245 }}.</ref>
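
A minimal sketch of Kronecker substitution in Python, assuming non-negative integer coefficients; the evaluation point (here 10<sup>6</sup>) is an illustrative choice that must exceed every coefficient of the product:

<syntaxhighlight lang="python">
def kronecker_multiply(p, q, base=10**6):
    """Multiply two polynomials, given as coefficient lists (lowest degree first),
    by evaluating both at x = base and performing one big-integer multiplication."""
    pv = sum(c * base**i for i, c in enumerate(p))   # pack p's coefficients
    qv = sum(c * base**i for i, c in enumerate(q))   # pack q's coefficients
    product = pv * qv                                # the single binary multiplication
    coeffs = []                                      # unpack base-"base" digits
    for _ in range(len(p) + len(q) - 1):
        product, r = divmod(product, base)
        coeffs.append(r)
    return coeffs

# (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
assert kronecker_multiply([1, 2], [3, 4]) == [3, 10, 8]
</syntaxhighlight>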
 
Long multiplication methods can be generalised to allow the multiplication of algebraic formulae:
———————————————————————————————————————
14a<sup>2</sup>c<sup>2</sup> -17a<sup>2</sup>bc 16ac 3a<sup>2</sup>b<sup>2</sup> -5ab +2
<nowiki>=======================================</nowiki><ref>{{cite book|last1=Castle|first1=Frank|title=Workshop Mathematics|url=https://archive.org/details/workshopmathema00castgoog|date=1900|publisher=MacMillan and Co|___location=London|page=[https://archive.org/details/workshopmathema00castgoog/page/n88 74]}}</ref>
 
As a further example of column based multiplication, consider multiplying 23 long tons (t), 12 hundredweight (cwt) and 2 quarters (qtr) by 47. This example uses [[avoirdupois]] measures: 1 t = 20 cwt, 1 cwt = 4 qtr.
==Further reading==
* {{Cite book |title=Hacker's Delight |first=Henry S. |last=Warren Jr. |date=2013 |edition=2 |publisher=[[Addison Wesley]] - [[Pearson Education, Inc.]] |isbn=978-0-321-84268-8|title-link=Hacker's Delight }}
* {{cite web |title=Advanced Arithmetic Techniques |author-first=John J. G. |author-last=Savard |date=2018 |orig-year=2006 |work=quadibloc |url=http://www.quadibloc.com/comp/cp0202.htm |access-date=2018-07-16 |url-status=live |archive-url=https://web.archive.org/web/20180703001722/http://www.quadibloc.com/comp/cp0202.htm |archive-date=2018-07-03}}
* {{cite book |title=Low Power and Low Complexity Shift-and-Add Based Computations |author-first=Kenny |author-last=Johansson |series=Linköping Studies in Science and Technology |year=2008 |type=Dissertation thesis |id=No. 1201 |publisher=Department of Electrical Engineering, [[Linköping University]]<!-- printed by LiU-Tryck --> |publication-place=Linköping, Sweden |edition=1 |isbn=978-91-7393-836-5 |issn=0345-7524 |url=https://www.diva-portal.org/smash/get/diva2:1733/FULLTEXT02.pdf |access-date=2021-08-23 |url-status=live |archive-url=https://web.archive.org/web/20170813200504/http://www.diva-portal.org/smash/get/diva2:1733/FULLTEXT02.pdf |archive-date=2017-08-13}} (x+268 pages)
 
==External links==
 
===Basic arithmetic===
* [http://www.nychold.com/em-arith.html The Many Ways of Arithmetic in UCSMP Everyday Mathematics]
* [http://math.widulski.net/slides/CH05_MustAllGoodThings.ppt A Powerpoint presentation about ancient mathematics]
* [http://www.pedagonet.com/maths/lattice.htm Lattice Multiplication Flash Video]
 
===Advanced algorithms===
* [http://gmplib.org/manual/Multiplication-Algorithms.html#Multiplication%20Algorithms Multiplication Algorithms used by GMP]
 
{{Number-theoretic algorithms}}