{{short description|Algorithm to multiply two numbers}}
{{Use dmy dates|date=May 2019|cs1-dates=y}}
A '''multiplication algorithm''' is an [[algorithm]] (or method) to [[multiplication|multiply]] two numbers. Depending on the size of the numbers, different algorithms are more efficient than others. Numerous algorithms are known and there has been much research into the topic.
The oldest and simplest method, known since [[Ancient history|antiquity]] as '''long multiplication''' or '''grade-school multiplication''', consists of multiplying every digit in the first number by every digit in the second and adding the results. This has a [[time complexity]] of <math>O(n^2)</math>, where ''n'' is the number of digits. When done by hand, this may also be reframed as [[grid method multiplication]] or [[lattice multiplication]]. In software, this may be called "shift and add" due to [[bitshifts]] and addition being the only two operations needed.
In 1960, [[Anatoly Karatsuba]] discovered [[Karatsuba multiplication]], unleashing a flood of research into fast multiplication algorithms. This method uses three multiplications rather than four to multiply two two-digit numbers. (A variant of this can also be used to multiply [[complex numbers]] quickly.) Done [[recursively]], this has a time complexity of <math>O(n^{\log_2 3})</math>. Splitting numbers into more than two parts results in [[Toom-Cook multiplication]]; for example, using three parts results in the '''Toom-3''' algorithm. Using many parts can set the exponent arbitrarily close to 1, but the constant factor also grows, making it impractical.
In 1968, the [[Schönhage–Strassen algorithm]], which makes use of a [[Fourier transform]] over a [[modular arithmetic|modulus]], was discovered. It has a time complexity of <math>O(n\log n\log\log n)</math>. In 2019, [[David Harvey (mathematician)|David Harvey]] and [[Joris van der Hoeven]] found an algorithm running in time <math>O(n\log n)</math>, matching a conjecture of Schönhage and Strassen that this is the optimal bound.
Integer multiplication algorithms can also be used to multiply polynomials by means of the method of [[Kronecker substitution]].
====Other notations====
In some countries such as [[Germany]], the above multiplication is depicted similarly but with the original product kept horizontal and computation starting with the first digit of the multiplier:<ref>{{Cite web |title=Multiplication}}</ref>
 23958233 · 5830
 ———————————————
 119791165
  191665864
    71874699
     00000000
 ———————————————
 139676498390
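The left-to-right scheme shown above translates directly into a short program. The following Python sketch is illustrative only (the function name and base-10 digit handling are assumptions, not taken from a cited source); it forms one partial product per multiplier digit and combines them with shifts and additions:
<syntaxhighlight lang="python">
def long_multiply(multiplicand, multiplier):
    """Long multiplication, taking the multiplier's digits from the most
    significant end first, as in the layout above."""
    total = 0
    for digit in str(multiplier):
        # shift the running total one place, then add the next partial product
        total = total * 10 + multiplicand * int(digit)
    return total

assert long_multiply(23958233, 5830) == 139676498390
</syntaxhighlight>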
|}
followed by addition to obtain 442, either in a single sum (see right), or through forming the row-by-row totals
: (300 + 40) + (90 + 12) = 340 + 102 = 442.

This calculation approach (though not necessarily with the explicit grid arrangement) is also known as the [[partial products algorithm]]. Its essence is the calculation of the simple multiplications separately, with all addition being left to the final gathering-up stage.
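The separation of the simple multiplications from the final gathering-up stage can be made concrete in a few lines of Python. This is a minimal sketch assuming base-10 splitting; the function names are illustrative, not from a cited source:
<syntaxhighlight lang="python">
def grid_multiply(x, y):
    """Grid method: split each factor into place-value parts, compute every
    simple product separately, and leave all addition to the final stage."""
    def parts(n):
        # 34 -> [4, 30]: each digit scaled by its place value
        return [int(d) * 10**i for i, d in enumerate(reversed(str(n)))]
    partial_products = [a * b for a in parts(x) for b in parts(y)]
    return sum(partial_products)  # the final gathering-up stage

assert grid_multiply(34, 13) == 442  # (300 + 40) + (90 + 12)
</syntaxhighlight>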
====History of quarter square multiplication====
Quarter square multiplication using the [[Floor and ceiling functions|floor function]] was known in antiquity; some sources<ref>{{citation |title=Quarter Tables Revisited: Earlier Tables, Division of Labor in Table Construction, and Later Implementations in Analog Computers |last=McFarland |first=David}}</ref> attribute it to [[Babylonian mathematics]] (2000–1600 BC).
Antoine Voisin published a table of quarter squares from 1 to 1000 in 1817 as an aid in multiplication. A larger table of quarter squares from 1 to 100000 was published by Samuel Laundy in 1856,<ref>{{Citation |title=Reviews |journal=The Civil Engineer and Architect's Journal |year=1857 |pages=54–55 |url=https://books.google.com/books?id=gcNAAAAAcAAJ&pg=PA54 |postscript=.}}</ref> and a table from 1 to 200000 by Joseph Blater in 1888.<ref>{{Citation|title=Multiplying with quarter squares |first=Neville |last=Holmes| journal=The Mathematical Gazette |volume=87 |issue=509 |year=2003 |pages=296–299 |jstor=3621048|postscript=.|doi=10.1017/S0025557200172778 |s2cid=125040256 }}</ref>
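These tables exploit the identity <math>xy = \left\lfloor \tfrac{(x+y)^2}{4} \right\rfloor - \left\lfloor \tfrac{(x-y)^2}{4} \right\rfloor</math>, which reduces a multiplication to one addition, one subtraction and two table lookups. A minimal Python sketch of the procedure (the table size and names are illustrative assumptions):
<syntaxhighlight lang="python">
# Table of floor(k^2 / 4); indices up to 2000 cover factors up to 1000,
# the range of Voisin's table.
QUARTER_SQUARES = [k * k // 4 for k in range(2001)]

def quarter_square_multiply(x, y):
    """xy = floor((x+y)^2/4) - floor((x-y)^2/4), via two table lookups."""
    return QUARTER_SQUARES[x + y] - QUARTER_SQUARES[abs(x - y)]

assert quarter_square_multiply(738, 416) == 738 * 416
</syntaxhighlight>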
{{unsolved|computer science|What is the fastest algorithm for multiplication of two <math>n</math>-digit numbers?}}
A line of research in [[theoretical computer science]] is about the number of single-bit arithmetic operations necessary to multiply two <math>n</math>-bit integers. This is known as the [[computational complexity]] of multiplication. Usual algorithms done by hand have asymptotic complexity of <math>O(n^2)</math>, but in 1960 [[Anatoly Karatsuba]] discovered that better complexity was possible (with the [[Karatsuba algorithm]]).<ref>{{cite web | url=https://youtube.com/watch?v=AMl6EJHfUWo | title= The Genius Way Computers Multiply Big Numbers| website=[[YouTube]]| date= 2 January 2025}}</ref>
Currently, the algorithm with the best computational complexity is a 2019 algorithm of [[David Harvey (mathematician)|David Harvey]] and [[Joris van der Hoeven]], which builds on the strategy of [[number-theoretic transform]]s introduced with the [[Schönhage–Strassen algorithm]] to multiply integers using only <math>O(n\log n)</math> operations.<ref>{{cite journal | last1 = Harvey | first1 = David | last2 = van der Hoeven | first2 = Joris | author2-link = Joris van der Hoeven | doi = 10.4007/annals.2021.193.2.4 | issue = 2 | journal = [[Annals of Mathematics]] | mr = 4224716 | pages = 563–617 | series = Second Series | title = Integer multiplication in time <math>O(n \log n)</math> | volume = 193 | year = 2021| s2cid = 109934776 | url = https://hal.archives-ouvertes.fr/hal-02070778v2/file/nlogn.pdf }}</ref> This is conjectured to be the best possible algorithm, but matching lower bounds of <math>\Omega(n\log n)</math> are not known.
By exploring patterns after expansion, one sees the following:
<math display="block">\begin{alignat}{5} (x_1 B^{ m} + x_0) (y_1 B^{m} + y_0) (z_1 B^{ m} + z_0) (a_1 B^{ m} + a_0) &=
\end{alignat}</math>
Each summand is associated to a unique binary number from 0 to <math>2^N - 1</math>, obtained by reading its subscripts as bits; for example, <math>x_1 y_1 z_1 a_1 \leftrightarrow 1111</math> and <math>x_1 y_0 z_0 a_1 \leftrightarrow 1001</math>.
If we express this in fewer terms, we get:
<math display="block"> \prod_{j=1}^N (x_{j,1} B^{m} + x_{j,0}) = \sum_{i=0}^{2^N - 1} \left( \prod_{j=1}^N x_{j,\,c(i,j)} \right) B^{m \sum_{j=1}^N c(i,j)}
</math>, where <math> c(i,j) </math> means the digit of the number ''i'' at position ''j'' in its binary representation. Notice that <math> c(i,j) \in \{0,1\} </math>.
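This correspondence between summands and binary indices can be checked mechanically. The following Python sketch (illustrative; the names and the small example are assumptions) evaluates the product by iterating over all <math>2^N</math> bit patterns:
<syntaxhighlight lang="python">
from itertools import product

def expand_product(halves, B, m):
    """Evaluate prod_j (x_{j,1} * B**m + x_{j,0}) as a sum with one term per
    binary index: each bit c(i, j) selects the low or high half of factor j."""
    total = 0
    for bits in product((0, 1), repeat=len(halves)):
        term = 1
        for (low, high), bit in zip(halves, bits):
            term *= high if bit else low
        total += term * B ** (m * sum(bits))
    return total

# 12 * 34 * 56 with B = 10, m = 1; each factor is given as (low, high)
assert expand_product([(2, 1), (4, 3), (6, 5)], 10, 1) == 12 * 34 * 56
</syntaxhighlight>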
<math display="block">
\begin{align}
z_{0} &= \prod_{j=1}^N x_{j,0}
\\
z_{N} &= \prod_{j=1}^N x_{j,1}▼
\\
▲z_{N} = \prod_{j=1}^N x_{j,1}
z_{N-1} &= \prod_{j=1}^N (x_{j,0} + x_{j,1}) - \sum_{i \ne N-1}^{N} z_i▼
\end{align}
▲z_{N-1} = \prod_{j=1}^N (x_{j,0} + x_{j,1}) - \sum_{i \ne N-1}^{N} z_i
</math>
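For <math>N = 2</math> these relations reduce to the classic [[Karatsuba algorithm]]: the two outer coefficients cost one multiplication each, and the middle one is obtained from a single product of sums. A compact Python sketch (illustrative; the decimal split point is an assumption):
<syntaxhighlight lang="python">
def karatsuba(x, y, base=10):
    """Recursive Karatsuba multiplication: the N = 2 case above, with
    z1 computed as (x0 + x1)(y0 + y1) - z0 - z2."""
    if x < base or y < base:
        return x * y  # single-digit base case
    m = max(len(str(x)), len(str(y))) // 2
    x1, x0 = divmod(x, base ** m)
    y1, y0 = divmod(y, base ** m)
    z0 = karatsuba(x0, y0)
    z2 = karatsuba(x1, y1)
    z1 = karatsuba(x0 + x1, y0 + y1) - z0 - z2  # only three recursive products
    return z2 * base ** (2 * m) + z1 * base ** m + z0

assert karatsuba(23958233, 5830) == 23958233 * 5830
</syntaxhighlight>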
{{Main|Schönhage–Strassen algorithm}}
[[File:Integer multiplication by FFT.svg|thumb|350px|Demonstration of multiplying 1234 × 5678 = 7006652 using fast Fourier transforms (FFTs). [[Number-theoretic transform]]s in the integers modulo 337 are used, selecting 85 as an 8th root of unity. Base 10 is used in place of base 2<sup>''w''</sup> for illustrative purposes.]]
Every number in base ''B'' can be written as a polynomial:
<math display="block"> X = \sum_{i=0}^N {x_iB^i} </math>
Furthermore, multiplication of two numbers can be thought of as a product of two polynomials:
<math display="block">XY = \left(\sum_{i=0}^N x_i B^i\right)\left(\sum_{j=0}^N y_j B^j\right) </math>
Because, for <math> B^k </math>: <math>c_k = \sum_{(i,j):\, i+j=k} a_i b_j = \sum_{i=0}^k a_i b_{k-i} </math>,
By using the FFT (fast Fourier transform) together with the convolution rule, we get
<math display="block"> \hat{f}(a * b) = \hat{f}{\left(\sum_{i=0}^k a_i b_{k-i}\right)} = \hat{f}(a) \bullet \hat{f}(b). </math>
In other words, the convolution becomes an elementwise product, each entry of which is the corresponding coefficient in Fourier space. This can also be written as: <math>\mathrm{fft}(a * b) = \mathrm{fft}(a) \bullet \mathrm{fft}(b)</math>.
only consist of one unique term per coefficient:
<math display="block"> \hat{f}(x^n) = \left(\frac{i}{2\pi}\right)^n \delta^{(n)} </math> and
<math display="block"> \hat{f}(a\, X(\xi) + b\, Y(\xi)) = a\, \hat{X}(\xi) + b\, \hat{Y}(\xi)</math>
Convolution rule: <math> \hat{f}(X * Y) = \hat{f}(X) \bullet \hat{f}(Y) </math>
We have reduced our convolution problem to a product problem, through the FFT.
By finding the inverse FFT (polynomial interpolation), for each <math>c_k</math>, one gets the desired coefficients.
The algorithm uses a divide-and-conquer strategy to divide the problem into subproblems.
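The whole pipeline (digit vectors, forward FFTs, pointwise products, inverse FFT, carrying) can be sketched in Python with NumPy. This is an illustration only: it uses floating-point FFTs, which are reliable only for modest input sizes, whereas the Schönhage–Strassen algorithm itself uses exact [[number-theoretic transform]]s:
<syntaxhighlight lang="python">
import numpy as np

def fft_multiply(x, y):
    """Multiply two integers via the convolution theorem:
    fft(a * b) = fft(a) . fft(b), evaluated pointwise."""
    a = [int(d) for d in reversed(str(x))]  # digit vector of x, base 10
    b = [int(d) for d in reversed(str(y))]  # digit vector of y, base 10
    n = 1
    while n < len(a) + len(b):
        n *= 2  # pad so the cyclic convolution equals the acyclic one
    fa, fb = np.fft.fft(a, n), np.fft.fft(b, n)
    c = np.rint(np.fft.ifft(fa * fb).real).astype(np.int64)  # coefficients c_k
    # carrying: big-integer arithmetic absorbs coefficients larger than 9
    return sum(int(ck) * 10 ** k for k, ck in enumerate(c))

assert fft_multiply(1234, 5678) == 7006652  # the example from the figure
</syntaxhighlight>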
==== History ====
The algorithm was invented by [[Volker Strassen|Strassen]] (1968). It was made practical and theoretical guarantees were provided in 1971 by [[Arnold Schönhage|Schönhage]] and Strassen resulting in the [[Schönhage–Strassen algorithm]].<ref name="schönhage">{{cite journal |first1=A. |last1=Schönhage |first2=V. |last2=Strassen |title=Schnelle Multiplikation großer Zahlen |journal=Computing |volume=7 |issue= 3–4|pages=281–292 |date=1971 |doi=10.1007/BF02242355 |s2cid=9738629 |url=https://link.springer.com/article/10.1007/BF02242355|url-access=subscription }}</ref>
=== Further improvements ===
In 2007 the [[asymptotic complexity]] of integer multiplication was improved by the Swiss mathematician [[Martin Fürer]] of Pennsylvania State University to <math>O(n\log n\cdot 2^{\Theta(\log^* n)})</math>, where <math>\log^*</math> denotes the [[iterated logarithm]], using Fourier transforms over complex numbers.
In 2014, Harvey, [[Joris van der Hoeven]] and Lecerf<ref>{{cite journal |last1=Harvey |first1=David |last2=van der Hoeven |first2=Joris |last3=Lecerf |first3=Grégoire |title=Even faster integer multiplication |journal=Journal of Complexity |volume=36 |pages=1–30 |year=2016 |arxiv=1407.3360 |doi=10.1016/j.jco.2016.03.001}}</ref> gave a new algorithm that achieves a running time of <math>O(n\log n\cdot 2^{3\log^* n})</math>, making explicit the constant implicit in Fürer's <math>\Theta(\log^* n)</math> term.
===Lower bounds===
There is a trivial lower bound of [[Big O notation#Family of Bachmann–Landau notations|Ω]](''n'') for multiplying two ''n''-bit numbers on a single processor; no matching algorithm (on conventional machines, that is on Turing equivalent machines) nor any sharper lower bound is known. The [[Hartmanis–Stearns conjecture]] would imply that <math>O(n)</math> cannot be achieved. Multiplication lies outside of [[ACC0|AC<sup>0</sup>[''p'']]] for any prime ''p'', meaning there is no family of constant-depth, polynomial (or even subexponential) size circuits using AND, OR, NOT, and MOD<sub>''p''</sub> gates that can compute a product. This follows from a constant-depth reduction of MOD<sub>''q''</sub> to multiplication.<ref>{{cite book |first1=Sanjeev |last1=Arora |first2=Boaz |last2=Barak |title=Computational Complexity: A Modern Approach |publisher=Cambridge University Press |date=2009 |isbn=978-0-521-42426-4 |url={{GBurl|8Wjqvsoo48MC|pg=PR7}}}}</ref> Lower bounds for multiplication are also known for some classes of [[branching program]]s.<ref>{{cite journal |first1=F. |last1=Ablayev |first2=M. |last2=Karpinski |title=A lower bound for integer multiplication on randomized ordered read-once branching programs |journal=Information and Computation |volume=186 |issue=1 |pages=78–89 |date=2003 |doi=10.1016/S0890-5401(03)00118-4 |url=https://core.ac.uk/download/pdf/82445954.pdf}}</ref>
==Complex number multiplication==
This algorithm uses only three multiplications, rather than four, and five additions or subtractions rather than two. If a multiply is more expensive than three adds or subtracts, as when calculating by hand, then there is a gain in speed. On modern computers a multiply and an add can take about the same time so there may be no speed gain. There is a trade-off in that there may be some loss of precision when using floating point.
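A sketch of one standard three-multiplication formulation in Python (illustrative; the article's formula may group the terms differently):
<syntaxhighlight lang="python">
def complex_multiply(a, b, c, d):
    """(a + bi)(c + di) with 3 multiplications and 5 additions/subtractions."""
    k1 = c * (a + b)
    k2 = a * (d - c)
    k3 = b * (c + d)
    return k1 - k3, k1 + k2  # real part ac - bd, imaginary part ad + bc

assert complex_multiply(3, 4, 5, 6) == (3*5 - 4*6, 3*6 + 4*5)
</syntaxhighlight>
Note that the two precomputable sums <math>d - c</math> and <math>c + d</math> appear explicitly, which is what makes the constant-coefficient FFT case below cheaper still.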
For [[fast Fourier transform]]s (FFTs) (or any [[Linear map|linear transformation]]) the complex multiplies are by constant coefficients ''c'' + ''di'' (called [[twiddle factor]]s in FFTs), in which case two of the additions (''d''−''c'' and ''c''+''d'') can be precomputed. Hence, only three multiplies and three adds are required.<ref>{{cite journal |first1=P. |last1=Duhamel |first2=M. |last2=Vetterli |title=Fast Fourier transforms: A tutorial review and a state of the art |journal=Signal Processing |volume=19 |issue=4 |pages=259–299 See Section 4.1 |date=1990 |doi=10.1016/0165-1684(90)90158-U |bibcode=1990SigPr..19..259D |url=https://core.ac.uk/download/pdf/147907050.pdf}}</ref> However, trading off a multiplication for an addition in this way may no longer be beneficial with modern [[floating-point unit]]s.<ref>{{cite journal |first1=S.G. |last1=Johnson |first2=M. |last2=Frigo |title=A modified split-radix FFT with fewer arithmetic operations |journal=IEEE Trans. Signal Process. |volume=55 |issue= 1|pages=111–9 See Section IV |date=2007 |doi=10.1109/TSP.2006.882087 |bibcode=2007ITSP...55..111J |s2cid=14772428 |url=https://www.fftw.org/newsplit.pdf }}</ref>
==Polynomial multiplication==
* [[Horner scheme]] for evaluating a polynomial
* [[Logarithm]]
* [[Matrix multiplication algorithm]]
* [[Mental calculation]]
* [[Number-theoretic transform]]
===Basic arithmetic===
* [
* [
* [
===Advanced algorithms===
* [
{{Number-theoretic algorithms}}