{{Short description|Algorithmic runtime requirements for matrix multiplication}}
{{CS1 config|mode=cs1}}
{{unsolved|computer science|What is the fastest algorithm for matrix multiplication?}}
In [[theoretical computer science]], the '''computational complexity of matrix multiplication''' dictates [[Analysis of algorithms|how quickly]] the operation of [[matrix multiplication]] can be performed. [[Matrix multiplication algorithm]]s are a central subroutine in theoretical and [[numerical algorithm|numerical]] algorithms for [[numerical linear algebra]] and [[optimization]], so finding the fastest algorithm for matrix multiplication is of major practical relevance.
Directly applying the mathematical definition of matrix multiplication gives an algorithm that requires {{math|''n''<sup>3</sup>}} [[Field (mathematics)|field]] operations to multiply two {{math|''n'' × ''n''}} matrices over that field ({{math|Θ(''n''<sup>3</sup>)}} in [[big O notation]]). Surprisingly, algorithms exist that provide better running times than this straightforward "schoolbook algorithm". The first to be discovered was [[Strassen algorithm|Strassen's algorithm]], devised by [[Volker Strassen]] in 1969 and often referred to as "fast matrix multiplication".<ref name="strassen69">
{{cite journal
 | last=Strassen
 | first=Volker
 | title=Gaussian elimination is not optimal
 | journal=Numerische Mathematik
 | volume=13
 | issue=4
 | pages=354–356
 | year=1969
 | doi=10.1007/BF02165411
}}</ref> The optimal number of field operations needed to multiply two square {{math|''n'' × ''n''}} matrices [[big O notation|up to constant factors]] is still unknown. This is a major open question in [[theoretical computer science]].
{{As of|2024|01}}, the best bound on the asymptotic complexity of a matrix multiplication algorithm is {{math|O(''n''<sup>2.371339</sup>)}}, due to Alman, Duan, Williams, Xu, Xu, and Zhou.<ref name="adwxxz24">
{{cite arXiv
 | last1=Alman
 | first1=Josh
 | last2=Duan
 | first2=Ran
 | last3=Williams
 | first3=Virginia Vassilevska
 | last4=Xu
 | first4=Yinzhan
 | last5=Xu
 | first5=Zixuan
 | last6=Zhou
 | first6=Renfei
 | title=More Asymmetry Yields Faster Matrix Multiplication
 | year=2024
 | eprint=2404.16349
 | class=cs.DS
}}</ref> However, this and similar improvements to Strassen are not used in practice, because they are [[galactic algorithm]]s: the constant coefficient hidden by the [[big O notation]] is so large that they are only worthwhile for matrices that are too large to handle on present-day computers.<ref>{{citation
| last = Iliopoulos
| first = Costas S.
 | doi = 10.1137/0218045
 | journal = SIAM Journal on Computing
 | volume = 18
 | issue = 4
 | pages = 658–669
 | title = Worst-case complexity bounds on algorithms for computing the canonical structure of finite abelian groups and the Hermite and Smith normal forms of an integer matrix
 | year = 1989
}}</ref><ref name="robinson">{{citation
 | last = Robinson
 | first = Sara
 | date = November 2005
 | issue = 9
 | journal = SIAM News
 | title = Toward an Optimal Algorithm for Matrix Multiplication
 | volume = 38
}}</ref>
== Simple algorithms ==
If ''A'', ''B'' are two {{math|''n'' × ''n''}} matrices over a field, then their product ''AB'' is also an {{math|''n'' × ''n''}} matrix over that field, defined entrywise as
:<math>
(AB)_{ij} = \sum_{k = 1}^n A_{ik} B_{kj}.
The simplest approach to computing the product is the iterative algorithm that follows directly from this definition:

 '''input''' ''A'' and ''B'', both ''n'' × ''n'' matrices
 initialize ''C'' to be an ''n'' × ''n'' matrix of all zeros
 '''for''' ''i'' '''from''' 1 '''to''' ''n'':
     '''for''' ''j'' '''from''' 1 '''to''' ''n'':
         '''for''' ''k'' '''from''' 1 '''to''' ''n'':
             ''C''[''i''][''j''] = ''C''[''i''][''j''] + ''A''[''i''][''k'']*''B''[''k''][''j'']
 '''output''' ''C'' (as A*B)

This [[algorithm]] requires, in the worst case, {{tmath|n^3}} multiplications and {{tmath|(n-1)n^2}} additions of scalars to compute the product of two square {{math|''n'' × ''n''}} matrices. Its computational complexity is therefore {{math|Θ(''n''<sup>3</sup>)}}, in a [[model of computation]] where field operations take constant time.
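
The pseudocode above translates directly into code. The following Python sketch is purely illustrative of the definition; it deliberately omits the blocking, vectorization, and cache optimizations used by practical libraries:

<syntaxhighlight lang="python">
def schoolbook_multiply(A, B):
    """Multiply two n-by-n matrices directly from the definition.

    Performs exactly n**3 scalar multiplications and (n-1)*n**2 additions,
    i.e. Theta(n**3) field operations.
    """
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C
</syntaxhighlight>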
=== Strassen's algorithm ===
{{Main|Strassen algorithm}}
Strassen's algorithm improves on naive matrix multiplication through a [[Divide-and-conquer algorithm|divide-and-conquer]] approach. The key observation is that multiplying two {{math|2 × 2}} matrices can be done with only 7 multiplications, instead of the usual 8 (at the expense of 11 additional addition and subtraction operations). This means that, treating the input {{math|''n'' × ''n''}} matrices as block {{math|2 × 2}} matrices, the task of multiplying two {{math|''n'' × ''n''}} matrices can be reduced to 7 subproblems of multiplying two {{math|''n''/2 × ''n''/2}} matrices. Applying this recursively gives an algorithm needing <math>O( n^{\log_{2}7}) \approx O(n^{2.807})</math> field operations.
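
A minimal Python sketch of this recursion may help make the seven block products concrete. It assumes, for simplicity, that ''n'' is a power of two; practical implementations instead switch to the schoolbook method below a size cutoff:

<syntaxhighlight lang="python">
def strassen_multiply(A, B):
    """Strassen's divide-and-conquer multiplication, for n a power of two.

    Each level of recursion replaces 8 block multiplications with 7, giving
    O(n^(log2 7)) ~ O(n^2.807) scalar multiplications overall.
    """
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    m = n // 2

    def quarter(M):
        # Split M into its four m-by-m blocks: M11, M12, M21, M22.
        return ([row[:m] for row in M[:m]], [row[m:] for row in M[:m]],
                [row[:m] for row in M[m:]], [row[m:] for row in M[m:]])

    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    A11, A12, A21, A22 = quarter(A)
    B11, B12, B21, B22 = quarter(B)
    # The seven block products in place of the usual eight.
    M1 = strassen_multiply(add(A11, A22), add(B11, B22))
    M2 = strassen_multiply(add(A21, A22), B11)
    M3 = strassen_multiply(A11, sub(B12, B22))
    M4 = strassen_multiply(A22, sub(B21, B11))
    M5 = strassen_multiply(add(A11, A12), B22)
    M6 = strassen_multiply(sub(A21, A11), add(B11, B12))
    M7 = strassen_multiply(sub(A12, A22), add(B21, B22))
    # Reassemble the four blocks of the product.
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bottom = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bottom
</syntaxhighlight>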
Unlike algorithms with faster asymptotic complexity, Strassen's algorithm is used in practice. The [[numerical stability]] is reduced compared to the naive algorithm,<ref>{{cite journal
 | last=Miller
 | first=Webb
 | title=Computational complexity and numerical stability
 | journal=SIAM Journal on Computing
 | volume=4
 | issue=2
 | pages=97–107
 | year=1975
 | doi=10.1137/0204009
}}</ref> but it is faster in cases where {{math|''n'' > 100}} or so and appears in several libraries, such as [[Basic Linear Algebra Subprograms|BLAS]].
== Matrix multiplication exponent ==
[[File:MatrixMultComplexity svg.svg|thumb|400px|right|Improvement of estimates of exponent {{math|ω}} over time for the computational complexity of matrix multiplication <math>O(n^\omega)</math>.]]
[[File:MatrixMultComplexity1990 svg.svg|thumb|400px|right|Closeup of 1990–2024]]
{| class="wikitable floatright"
|+ Timeline of matrix multiplication exponent
! Year !! Bound on ω !! Authors
|-
| 1969 || 2.8074 || [[Volker Strassen|Strassen]]<ref name="strassen69" />
|-
| 1978 || 2.796 || [[Victor Pan|Pan]]<ref>
{{cite book
| doi=10.1109/SFCS.1978.34
 | last=Pan
 | first=V. Ya.
 | chapter=Strassen's algorithm is not optimal: Trilinear technique of aggregating, uniting and canceling for constructing fast algorithms for matrix operations
 | title=Proceedings of the 19th Annual Symposium on Foundations of Computer Science (SFCS 1978)
 | pages=166–176
 | year=1978
}}</ref>
|-
| 1979 || 2.780 || Bini, {{ill|Milvio Capovani|it|lt=Capovani}}, Romani<ref>
{{cite journal
| doi=10.1016/0020-0190(79)90113-3
 | last1=Bini | first1=Dario
 | last2=Capovani | first2=Milvio
 | last3=Lotti | first3=Grazia
 | last4=Romani | first4=Francesco
 | title=<math>O(n^{2.7799})</math> complexity for <math>n \times n</math> approximate matrix multiplication
 | journal=Information Processing Letters
 | volume=8
 | issue=5
| pages=234–235
| date=Jun 1979
| url-access=subscription
}}</ref>
|-
| 1981 || 2.522 || [[Arnold Schönhage|Schönhage]]<ref>
{{cite journal
| doi=10.1137/0210032
 | last=Schönhage
 | first=Arnold
 | title=Partial and total matrix multiplication
 | journal=SIAM Journal on Computing
 | volume=10
 | issue=3
 | pages=434–455
 | year=1981
}}</ref>
|-
| 1981 || 2.517 || Romani<ref>
{{cite journal
 | doi=10.1137/0211020
 | last=Romani
 | first=Francesco
 | title=Some properties of disjoint sums of tensors related to matrix multiplication
 | journal=SIAM Journal on Computing
 | volume=11
 | issue=2
 | pages=263–267
 | year=1982
}}</ref><!-- Note: Romani and Coppersmith-Winograd do not cite each other, so these appear to be simultaneous. -->
|-
| 1981 || 2.496 || [[Don Coppersmith|Coppersmith]], [[Shmuel Winograd|Winograd]]<ref name="CW.81">
{{cite book
| doi=10.1109/SFCS.1981.27
 | last1=Coppersmith | first1=Don
 | last2=Winograd | first2=Shmuel
 | chapter=On the asymptotic complexity of matrix multiplication
 | title=22nd Annual Symposium on Foundations of Computer Science (SFCS 1981)
 | pages=82–90
 | year=1981
}}</ref>
|-
| 1986 || 2.479 || [[Volker Strassen|Strassen]]<ref>
{{cite book
 | last=Strassen
 | first=Volker
 | chapter=The asymptotic spectrum of tensors and the exponent of matrix multiplication
 | title=27th Annual Symposium on Foundations of Computer Science (SFCS 1986)
| pages=49–54
| date=Oct 1986
| isbn=0-8186-0740-8
| s2cid=15077423
}}</ref>
|-
| 1990 || 2.3755 || [[Don Coppersmith|Coppersmith]], [[Shmuel Winograd|Winograd]]<ref>
{{cite journal
 | doi=10.1016/S0747-7171(08)80013-2
 | last1=Coppersmith | first1=Don
 | last2=Winograd | first2=Shmuel
 | title=Matrix multiplication via arithmetic progressions
 | journal=Journal of Symbolic Computation
 | volume=9
 | issue=3
 | pages=251–280
 | date=March 1990
}}</ref>
|-
| 2010 || 2.3737 || Stothers<ref>
{{cite thesis
 | last=Stothers
 | first=Andrew James
 | title=On the complexity of matrix multiplication
 | publisher=University of Edinburgh
 | year=2010
}}</ref>
|-
| 2012 || 2.3729 || [[Virginia Vassilevska Williams|Williams]]<ref>
{{cite book
| doi=10.1145/2213977.2214056
 | last=Williams
 | first=Virginia Vassilevska
 | chapter=Multiplying matrices faster than Coppersmith–Winograd
 | title=Proceedings of the 44th Annual ACM Symposium on Theory of Computing (STOC '12)
| pages=887–898
| year=2012
| isbn=978-1-4503-1245-5
| s2cid=14350287
}}</ref><ref name="Williams.2014">
{{cite report
 | last=Williams
 | first=Virginia Vassilevska
 | title=Multiplying matrices in <math>O(n^{2.373})</math> time
 | institution=Stanford University
 | year=2014
}}</ref>
|-
| 2014 || 2.3728639 || Le Gall<ref name="LeGall2014">
{{cite book
| last1=Le Gall
| first1=François
 | chapter=Powers of tensors and fast matrix multiplication
 | title=Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation (ISSAC '14)
 | pages=296–303
 | year=2014
 | arxiv=1401.7714
 | doi=10.1145/2608628.2608664
}}</ref>
|-
| 2020 || 2.3728596 || Alman, [[Virginia Vassilevska Williams|Williams]]<ref name="aw20">
{{cite journal
 | last1=Alman
 | first1=Josh
 | last2=Williams
 | first2=Virginia Vassilevska
 | title=A Refined Laser Method and Faster Matrix Multiplication
 | arxiv=2010.05846
 | journal=Theoretics
 | year=2024
 | doi=10.46298/theoretics.24.21
}}</ref><ref>{{Cite web|last=Hartnett|first=Kevin|title=Matrix Multiplication Inches Closer to Mythic Goal|url=https://www.quantamagazine.org/mathematicians-inch-closer-to-matrix-multiplication-goal-20210323/|access-date=2021-04-01|website=Quanta Magazine|date=23 March 2021 |language=en}}</ref>
|-
| 2022 || 2.371866 || Duan, Wu, Zhou<ref name="dwz22">
{{cite arXiv |eprint=2210.10173 |class=cs.DS |first1=Ran |last1=Duan |first2=Hongxun |last2=Wu |title=Faster Matrix Multiplication via Asymmetric Hashing |last3=Zhou |first3=Renfei |year=2022}}</ref>
|-
| 2024 || 2.371552 || [[Virginia Vassilevska Williams|Williams]], Xu, Xu, and Zhou<ref name="wxxz23">{{cite conference |last1=Vassilevska Williams |first1=Virginia |last2=Xu |first2=Yinzhan |last3=Xu |first3=Zixuan |last4=Zhou |first4=Renfei |title=New Bounds for Matrix Multiplication: from Alpha to Omega |conference=Proceedings of the 2024 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA) |pages=3792–3835 |arxiv=2307.07970 |doi=10.1137/1.9781611977912.134}}
</ref>
|-
| 2024 || 2.371339 || Alman, Duan, [[Virginia Vassilevska Williams|Williams]], Xu, Xu, and Zhou<ref name="adwxxz24"/>
|}
The ''matrix multiplication exponent'', usually denoted {{math|ω}}, is the smallest real number for which any two <math>n\times n</math> matrices over a field can be multiplied together using <math>n^{\omega + o(1)}</math> field operations. This notation is commonly used in algorithms research, so that algorithms using matrix multiplication as a subroutine have bounds on running time that can update as bounds on {{math|ω}} improve.
Using a naive lower bound and schoolbook matrix multiplication for the upper bound, one can straightforwardly conclude that {{math|2 ≤ ω ≤ 3}}. Whether {{math|1=ω = 2}} holds is a major open question in [[theoretical computer science]], and there is a line of research developing matrix multiplication algorithms to obtain improved bounds on {{math|ω}}.
Nonetheless, the above are classical examples of [[Galactic algorithm#Matrix multiplication|galactic algorithms]]. By contrast, Strassen's algorithm of 1969 and [[Victor Pan|Pan's]] algorithm of 1978, whose respective exponents are slightly above and below 2.78, have constant coefficients that make them feasible in practice.<ref>{{cite journal | last1=Laderman | first1=Julian | last2=Pan | first2=Victor |last3=Sha | first3=Xuan-He | title=On practical algorithms for accelerated matrix multiplication | year=1992 | journal=Linear Algebra and Its Applications | volume=162-164 | pages=557–588 | doi=10.1016/0024-3795(92)90393-O}}</ref>
=== Group theory reformulation of matrix multiplication algorithms ===
[[Henry Cohn]], [[Robert Kleinberg]], [[Balázs Szegedy]] and [[Chris Umans]] put methods such as the Strassen and Coppersmith–Winograd algorithms in an entirely different [[group theory|group-theoretic]] context, by utilising triples of subsets of finite groups which satisfy a disjointness property called the [[Triple product property|triple product property (TPP)]]. They also give conjectures that, if true, would imply that there are matrix multiplication algorithms with essentially quadratic complexity. This implies that the optimal exponent of matrix multiplication is 2, which most researchers believe is indeed the case.<ref name="robinson"/> One such conjecture is that families of [[wreath product]]s of [[Abelian group]]s with symmetric groups realise families of subset triples with a simultaneous version of the TPP.<ref>{{Cite book | last1 = Cohn | first1 = H. | last2 = Kleinberg | first2 = R. | last3 = Szegedy | first3 = B. | last4 = Umans | first4 = C. | chapter = Group-theoretic Algorithms for Matrix Multiplication | doi = 10.1109/SFCS.2005.39 | title = 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS'05) | pages = 379 | year = 2005 | arxiv = math/0511460 | isbn = 0-7695-2468-0 | s2cid = 41278294 | url = https://authors.library.caltech.edu/23966/ }}</ref><ref>{{cite book |first1=Henry |last1=Cohn |first2=Christopher |last2=Umans |chapter=A Group-theoretic Approach to Fast Matrix Multiplication |title=Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2003) |year=2003 |doi=10.1109/SFCS.2003.1238217 |arxiv=math/0307321}}</ref>
=== Lower bounds for ω ===
There is a trivial lower bound of {{tmath|\omega \ge 2}}. Since any algorithm for multiplying two {{math|''n'' × ''n''}}-matrices has to process all {{math|2''n''<sup>2</sup>}} entries, there is a trivial asymptotic lower bound of {{math|Ω(''n''<sup>2</sup>)}} operations for any matrix multiplication algorithm. Thus {{tmath|2\le \omega < 2.371339}}.
The exponent ω is defined to be a [[Accumulation point|limit point]], in that it is the infimum of the exponent over all matrix multiplication algorithms. It is known that this limit point is not achieved. In other words, under the model of computation typically studied, there is no matrix multiplication algorithm that uses precisely {{math|O(''n''<sup>ω</sup>)}} operations; there must be an additional factor of {{math|''n''<sup>o(1)</sup>}}.<ref name="CW.81"/>
=== Rectangular matrix multiplication ===
Similar techniques also apply to rectangular matrix multiplication. The central object of study is <math>\omega(k)</math>, which is the smallest <math>c</math> such that one can multiply a matrix of size <math>n\times \lceil n^k\rceil</math> with a matrix of size <math>\lceil n^k\rceil \times n</math> with <math>O(n^{c + o(1)})</math> arithmetic operations. A result in algebraic complexity states that multiplying matrices of size <math>n\times \lceil n^k\rceil</math> and <math>\lceil n^k\rceil \times n</math> requires the same number of arithmetic operations as multiplying matrices of size <math>n\times \lceil n^k\rceil</math> and <math>n \times n</math> and of size <math>n \times n</math> and <math>n\times \lceil n^k\rceil</math>, so this encompasses the complexity of rectangular matrix multiplication.<ref name="gall18">{{
citation
 | last1 = Le Gall | first1 = François
 | last2 = Urrutia | first2 = Florent
 | editor-last = Czumaj | editor-first = Artur
 | arxiv = 1708.05622
 | contribution = Improved rectangular matrix multiplication using powers of the Coppersmith–Winograd tensor
 | title = Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2018)
 | pages = 1029–1046
 | doi = 10.1137/1.9781611975031.67
 | year = 2018
 | isbn = 978-1-61197-503-1
}}</ref> This generalizes the square matrix multiplication exponent, since <math>\omega(1) = \omega</math>.
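
For illustration, the schoolbook method multiplies an <math>n\times \lceil n^k\rceil</math> matrix by an <math>\lceil n^k\rceil \times n</math> matrix with <math>n \cdot \lceil n^k\rceil \cdot n</math> scalar multiplications, giving the trivial upper bound <math>\omega(k) \le k + 2</math>. The short Python sketch below (an illustrative aid, not drawn from the literature above) counts these operations and recovers the exponent numerically:

<syntaxhighlight lang="python">
import math

def naive_rectangular_multiplications(n: int, k: float) -> int:
    """Scalar multiplications used by the schoolbook method on an
    (n x ceil(n^k)) by (ceil(n^k) x n) product: one per (i, j, inner) triple."""
    inner = math.ceil(n ** k)
    return n * inner * n

# log_n of the count approaches k + 2, the trivial upper bound on omega(k).
n = 1000
for k in (0.25, 0.5, 1.0):
    count = naive_rectangular_multiplications(n, k)
    print(f"k = {k}: {count} multiplications, exponent ~ {math.log(count, n):.3f}")
</syntaxhighlight>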
Since the output of the matrix multiplication problem is size <math>n^2</math>, we have <math>\omega(k) \geq 2</math> for all values of <math>k</math>. If one can prove for some values of <math>k</math> between 0 and 1 that <math>\omega(k) \leq 2</math>, then such a result shows that <math>\omega(k) = 2</math> for those <math>k</math>. The largest ''k'' such that <math>\omega(k) = 2</math> is known as the ''dual matrix multiplication exponent'', usually denoted ''α''. ''α'' is referred to as the "[[Duality (optimization)|dual]]" because showing that <math>\alpha = 1</math> is equivalent to showing that <math>\omega = 2</math>. Like the matrix multiplication exponent, the dual matrix multiplication exponent sometimes appears in the complexity of algorithms in numerical linear algebra and optimization.<ref>{{Cite journal|last1=Cohen|first1=Michael B.|last2=Lee|first2=Yin Tat|last3=Song|first3=Zhao|date=2021-01-05|title=Solving Linear Programs in the Current Matrix Multiplication Time|url=https://doi.org/10.1145/3424305|journal=Journal of the ACM|volume=68|issue=1|pages=3:1–3:39|doi=10.1145/3424305|issn=0004-5411|arxiv=1810.07896|s2cid=231955576 }}</ref>
The first bound on ''α'' is by [[Don Coppersmith|Coppersmith]] in 1982, who showed that <math>\alpha > 0.17227</math>.<ref>{{Cite journal|last=Coppersmith|first=D.|date=1982-08-01|title=Rapid Multiplication of Rectangular Matrices|url=https://epubs.siam.org/doi/10.1137/0211037|journal=SIAM Journal on Computing|volume=11|issue=3|pages=467–471|doi=10.1137/0211037|issn=0097-5397|url-access=subscription}}</ref> The current best peer-reviewed bound on ''α'' is <math>\alpha > 0.321334</math>, due to [[Virginia Vassilevska Williams|Williams]], Xu, Xu, and Zhou.<ref name="wxxz23" />
==Related problems==
===Minimizing number of multiplications===
Related to the problem of minimizing the number of arithmetic operations is minimizing the number of multiplications, which is typically a more costly operation than addition. An <math>O(n^\omega)</math> algorithm for matrix multiplication must necessarily use only <math>O(n^\omega)</math> multiplication operations, but these algorithms are impractical. Improving on the naive <math>n^3</math> multiplications of schoolbook multiplication, <math>4\times 4</math> matrices over <math>\mathbb{Z}/2\mathbb{Z}</math> can be multiplied with 47 multiplications,<ref>{{Cite journal |last1=Fawzi |first1=Alhussein |last2=Balog |first2=Matej |last3=Huang |first3=Aja |last4=Hubert |first4=Thomas |last5=Romera-Paredes |first5=Bernardino |display-authors=etal |date=2022 |title=Discovering faster matrix multiplication algorithms with reinforcement learning |journal=Nature |volume=610 |issue=7930 |pages=47–53 |doi=10.1038/s41586-022-05172-4 |doi-access=free}}</ref> and {{math|3 × 3}} matrix multiplication over a commutative ring can be done in 21 multiplications<ref>{{citation
| last = Rosowski | first = Andreas
| arxiv = 1904.07683
| doi = 10.1016/j.jsc.2022.05.002
| journal = Journal of Symbolic Computation
| mr = 4433063
| pages = 302–321
| title = Fast commutative matrix algorithms
| volume = 114
| year = 2023}}</ref><ref>{{cite journal |last1=Makarov |first1=O. M. |title=An algorithm for multiplying 3×3 matrices |journal=Zhurnal Vychislitel'noi Matematiki I Matematicheskoi Fiziki |volume=26 |issue=2 |year=1986 |pages=293–294 |url=https://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=zvmmf&paperid=4056&option_lang=eng |access-date=5 October 2022}}
:Also in {{cite journal |doi=10.1016/0041-5553(86)90203-X |title=An algorithm for multiplying 3×3 matrices |year=1986 |last1=Makarov |first1=O. M. |journal=USSR Computational Mathematics and Mathematical Physics |volume=26 |pages=179–180 }}</ref> (23 if non-commutative<ref>{{Cite journal |last=Laderman |first=Julian D. |date=1976 |title=A noncommutative algorithm for multiplying 3×3 matrices using 23 multiplications |url=https://www.ams.org/bull/1976-82-01/S0002-9904-1976-13988-2/ |journal=Bulletin of the American Mathematical Society |language=en |volume=82 |issue=1 |pages=126–128 |doi=10.1090/S0002-9904-1976-13988-2 |issn=0002-9904|doi-access=free }}</ref>). A lower bound of 2''mn''+2''n''−''m''−2 multiplications is known for multiplying ''n''×''m'' matrices by ''m''×''n'' matrices using the substitution method (<math>m \ge n \ge 3</math>), which means that the case {{math|1=''n'' = 3}} requires at least 19 multiplications and {{math|1=''n'' = 4}} at least 34.<ref>{{Cite journal |last=Bläser |first=Markus |date=February 2003 |title=On the complexity of the multiplication of matrices of small formats |journal=Journal of Complexity |language=en |volume=19 |issue=1 |pages=43–60 |doi=10.1016/S0885-064X(02)00007-9|doi-access=free }}</ref> For {{math|1=''n'' = 2}}, seven multiplications are optimal, and 15 additions are minimal among seven-multiplication algorithms, compared with only four additions when eight multiplications are used.<ref>{{Cite journal |last=Winograd |first=S. |date=1971-10-01 |title=On multiplication of 2 × 2 matrices |journal=Linear Algebra and Its Applications |language=en |volume=4 |issue=4 |pages=381–388 |doi=10.1016/0024-3795(71)90009-7 |issn=0024-3795|doi-access=free }}</ref><ref>{{Cite book |last=Probert |first=Robert L. |title=On the complexity of matrix multiplication |date=1973 |publisher=University of Waterloo |oclc=1124200063}}</ref>
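
The seven-multiplication scheme for {{math|1=''n'' = 2}} can be verified directly. The following Python sketch checks Strassen's seven products against the definition on random integer matrices (an illustrative check using Strassen's original recombination, not the Winograd variant that attains the minimal 15 additions):

<syntaxhighlight lang="python">
import random

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 scalar multiplications instead of 8."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    # Strassen's seven products.
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # Recombine into the four entries of the product.
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

# Check the scheme against the definition on random integer inputs.
for _ in range(1000):
    A = [[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
    B = [[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
    reference = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                 for i in range(2)]
    assert strassen_2x2(A, B) == reference
</syntaxhighlight>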
==See also==
== External links ==
*[https://fmm.univ-lille.fr/ Yet another catalogue of fast matrix multiplication algorithms]
[[Category:Computer arithmetic algorithms]]