In [[theoretical computer science]], an active area of research is determining [[Analysis of algorithms|how efficient]] the operation of [[matrix multiplication]] can be performed. [[Matrix multiplication algorithm]]s are a central subroutine in theoretical and [[numerical algorithm|numerical]] algorithms for [[numerical linear algebra]] and [[optimization]], so finding the right amount of time it should take is of major practical relevance.
Directly applying the mathematical definition of matrix multiplication gives an algorithm that takes on the order of {{math|''n''<sup>3</sup>}} field operations to multiply two {{math|''n'' × ''n''}} matrices ({{math|Θ(''n''<sup>3</sup>)}} time), but algorithms with better asymptotic running times are known.
{{As of|2020|12}}, the matrix multiplication algorithm with the best [[Time complexity|asymptotic complexity]] runs in {{math|O(''n''<sup>2.3728596</sup>)}} time, given by Josh Alman and [[Virginia Vassilevska Williams]].<ref name="aw20">
{{Citation
| last1=Alman
| first1=Josh
| last2=Williams
| first2=Virginia Vassilevska
| contribution=A Refined Laser Method and Faster Matrix Multiplication
| year = 2020
| arxiv=2010.05846
| title = 32nd Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2021)
|url=https://www.siam.org/conferences/cm/program/accepted-papers/soda21-accepted-papers
}}</ref><ref>{{Cite web|last=Hartnett|first=Kevin|title=Matrix Multiplication Inches Closer to Mythic Goal|url=https://www.quantamagazine.org/mathematicians-inch-closer-to-matrix-multiplication-goal-20210323/|access-date=2021-04-01|website=Quanta Magazine|language=en}}</ref> However, this and similar improvements to Strassen's algorithm are not used in practice, because the constant coefficient hidden by the [[Big O notation]] is so large that these [[galactic algorithm]]s are only worthwhile for matrices that are too large to handle on present-day computers.<ref>{{citation
| last = Iliopoulos
| first = Costas S.
}}</ref><ref name="robinson">{{Citation | last=Robinson | first=Sara | title=Toward an Optimal Algorithm for Matrix Multiplication | url=https://archive.siam.org/pdf/news/174.pdf | date=November 2005 | journal=SIAM News | volume=38 | issue=9 | quote=Even if someone manages to prove one of the conjectures—thereby demonstrating that {{math|1=''ω'' = 2}}—the wreath product approach is unlikely to be applicable to the large matrix problems that arise in practice. [...] the input matrices must be astronomically large for the difference in time to be apparent.}}</ref>
== Simple algorithms ==
If ''A'' and ''B'' are {{math|''n'' × ''n''}} matrices over a field, then their product ''AB'' is also an {{math|''n'' × ''n''}} matrix over that field, defined entrywise as
:<math>
(AB)_{ij} = \sum_{k = 1}^n A_{ik} B_{kj}.
</math>
=== Schoolbook algorithm ===
{{For|implementation techniques (in particular parallel and distributed algorithms)|Matrix multiplication algorithm}}
The simplest approach to computing the product of two {{math|''n'' × ''n''}} matrices ''A'' and ''B'' is to compute the arithmetic expressions coming from the definition of matrix multiplication. In pseudocode:
 input is A and B, both n by n matrices
 initialize C to be an n by n matrix of all zeros
 for i from 1 to n:
     for j from 1 to n:
         for k from 1 to n:
             C[i][j] = C[i][j] + A[i][k]*B[k][j]
 output C (as A*B)
This [[algorithm]] requires, in the [[worst-case complexity|worst case]], {{tmath|n^3}} multiplications of scalars and {{tmath|n^3 - n}} additions for computing the product of two square {{math|''n''×''n''}} matrices. Its [[computational complexity]] is therefore {{tmath|O(n^3)}}, in a [[model of computation]] where field operations (addition and multiplication) take constant time (in practice, this is the case for [[floating point]] numbers, but not necessarily for integers).
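For concreteness, the pseudocode above can be transcribed directly into Python (a minimal sketch using plain nested lists; the function name is arbitrary, and practical code would call an optimized library instead):
<syntaxhighlight lang="python">
def matmul(A, B):
    """Schoolbook multiplication of two n-by-n matrices given as nested lists."""
    n = len(A)
    C = [[0] * n for _ in range(n)]  # initialize C to all zeros
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]  # n scalar multiplications per entry
    return C
</syntaxhighlight>
For example, <code>matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]])</code> returns <code>[[19, 22], [43, 50]]</code>.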
=== Strassen's algorithm ===
{{Main|Strassen algorithm}}
Strassen's algorithm is based on a way of multiplying two {{math|2 × 2}}-matrices which requires only 7 multiplications (instead of the usual 8), at the expense of several additional addition and subtraction operations. Applying this recursively gives an algorithm with a multiplicative cost of <math>O( n^{\log_{2}7}) \approx O(n^{2.807})</math>. Strassen's algorithm is more complex, and the [[numerical stability]] is reduced compared to the naïve algorithm,<ref>{{Citation | last1=Miller | first1=Webb | title=Computational complexity and numerical stability | citeseerx = 10.1.1.148.9947 | year=1975 | journal=SIAM Journal on Computing | volume=4 | issue=2 | pages=97–107 | doi=10.1137/0204009}}</ref> but it is faster in cases where {{math|''n'' > 100}} or so<ref name="skiena"/> and appears in several libraries, such as [[Basic Linear Algebra Subprograms|BLAS]].<ref>{{cite book |last1=Press |first1=William H. |last2=Flannery |first2=Brian P. |last3=Teukolsky |first3=Saul A. |author3-link=Saul Teukolsky |last4=Vetterling |first4=William T. |title=Numerical Recipes: The Art of Scientific Computing |publisher=[[Cambridge University Press]] |edition=3rd |isbn=978-0-521-88068-8 |year=2007 |page=[https://archive.org/details/numericalrecipes00pres_033/page/n131 108]|title-link=Numerical Recipes }}</ref> It is very useful for large matrices over exact domains such as [[finite field]]s, where numerical stability is not an issue.
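The seven products can be written out explicitly. The following is a minimal Python sketch (assuming the matrix size is a power of two; the function name and the cutoff below which the schoolbook method is used are choices made here for concreteness, not part of the algorithm's specification):
<syntaxhighlight lang="python">
import numpy as np

def strassen(A, B, cutoff=64):
    """Multiply square matrices whose size is a power of two using Strassen's
    7-multiplication scheme recursively; a sketch, not a tuned implementation."""
    n = A.shape[0]
    if n <= cutoff:  # below the cutoff, plain multiplication is faster in practice
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Strassen's seven recursive products (instead of the usual eight)
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    # Reassemble the four blocks of the product
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M3 + M6
    return C
</syntaxhighlight>
The running time satisfies the recurrence {{math|1=''T''(''n'') = 7''T''(''n''/2) + ''O''(''n''<sup>2</sup>)}}, which resolves to the {{math|''O''(''n''<sup>log<sub>2</sub> 7</sup>)}} bound stated above.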
== The matrix multiplication exponent ==
[[File:MatrixMultComplexity svg.svg|thumb|400px|right|Improvement of estimates of exponent {{math|ω}} over time for the computational complexity of matrix multiplication <math>O(n^\omega)</math>.]]
{| class="wikitable floatright"
|+ Timeline of matrix multiplication exponent
! Year !! Bound on ω !! Authors
|-
| 1969 || 2.8074 || [[Volker Strassen|Strassen]]<ref>
{{cite journal
| doi=10.1007/BF02165411
| last=Strassen
| first=Volker
| author-link=Volker Strassen
| title=Gaussian elimination is not optimal
| journal=Numerische Mathematik
| volume=13
| issue=4
| pages=354–356
| year=1969
| s2cid = 121656251
}}</ref>
|-
| 1978 || 2.796 || Pan<ref>
{{cite book
| doi=10.1109/SFCS.1978.34
| last=Pan
| first=V. Ya.
| chapter=Strassen's algorithm is not optimal: Trilinear technique of aggregating, uniting and canceling for constructing fast algorithms for matrix operations
| title=19th Annual Symposium on Foundations of Computer Science (SFCS 1978)
| pages=166–176
| date=Oct 1978
| s2cid= 14348408
}}</ref>
|-
| 1979 || 2.780 || Bini, Capovani, Romani<ref>
{{cite journal
| doi=10.1016/0020-0190(79)90113-3
| last1=Bini
| first1=Dario
| last2=Capovani
| first2=Milvio
| last3=Romani
| first3=Francesco
| last4=Lotti
| first4=Grazia
| title=O(n<sup>2.7799</sup>) complexity for n × n approximate matrix multiplication
| journal=Information Processing Letters
| volume=8
| issue=5
| pages=234–235
| date=Jun 1979
}}</ref>
|-
| 1981 || 2.522 || Schönhage<ref>
{{cite journal
| doi=10.1137/0210032
| last=Schönhage
| first=Arnold
| title=Partial and total matrix multiplication
| journal=SIAM Journal on Computing
| volume=10
| issue=3
| pages=434–455
| year=1981
}}</ref>
|-
| 1981 || 2.517 || Romani<ref>
{{cite journal
| doi=10.1137/0211020
| last=Romani
| first=Francesco
| title=Some properties of disjoint sums of tensors related to matrix multiplication
| journal=SIAM Journal on Computing
| volume=11
| issue=2
| pages=263–267
| year=1982
}}</ref><!-- Note: Romani and Coppersmith-Winograd do not cite each other, so these appear to be simultaneous. -->
|-
| 1981 || 2.496 || [[Don Coppersmith|Coppersmith]], [[Shmuel Winograd|Winograd]]<ref>
{{cite book
| doi=10.1109/SFCS.1981.27
| last1=Coppersmith
| first1=Don
| author-link1=Don Coppersmith
| last2=Winograd
| first2=Shmuel
| author-link2=Shmuel Winograd
| chapter=On the asymptotic complexity of matrix multiplication
| title=22nd Annual Symposium on Foundations of Computer Science (SFCS 1981)
| pages=82–90
| year=1981
| s2cid=206558664
}}</ref>
|-
| 1986 || 2.479 || [[Volker Strassen|Strassen]]<ref>
{{cite book
| doi=10.1109/SFCS.1986.52
| last=Strassen
| first=Volker
| author-link=Volker Strassen
| chapter=The asymptotic spectrum of tensors and the exponent of matrix multiplication
| title=27th Annual Symposium on Foundations of Computer Science (SFCS 1986)
| pages=49–54
| year=1986
| s2cid=15077423
}}</ref>
|-
| 1990 || 2.3755 || [[Don Coppersmith|Coppersmith]], [[Shmuel Winograd|Winograd]]<ref>
{{cite journal
| url=https://www.sciencedirect.com/science/article/pii/S0747717108800132
| last1=Coppersmith
| first1=Don
| last2=Winograd
| first2=Shmuel
| title=Matrix multiplication via arithmetic progressions
| journal=Journal of Symbolic Computation
| volume=9
| issue=3
| pages=251–280
| date=Mar 1990
| doi=10.1016/S0747-7171(08)80013-2
}}</ref>
|-
| 2010 || 2.3737 || Stothers<ref>
{{cite thesis
| type=Ph.D. thesis
| last=Stothers
| first=Andrew James
| title=On the complexity of matrix multiplication
| publisher=University of Edinburgh
| year=2010
}}</ref>
|-
| 2013 || 2.3729 || [[Virginia Vassilevska Williams|Williams]]<ref>
{{cite book
| doi=10.1145/2213977.2214056
| last=Williams
| first=Virginia Vassilevska
| author-link=Virginia Vassilevska Williams
| chapter=Multiplying matrices faster than Coppersmith–Winograd
| title=Proceedings of the 44th Annual ACM Symposium on Theory of Computing (STOC '12)
| pages=887–898
| year=2012
}}</ref><ref name="Williams.2014">
{{cite report
| last=Williams
| first=Virginia Vassilevska
| author-link=Virginia Vassilevska Williams
| title=Multiplying matrices in <math>O(n^{2.373})</math> time
| institution=Stanford University
| type=Technical Report
| url=http://www.cs.stanford.edu/~virgi/matrixmult-f.pdf
}}</ref>
|-
| 2014 || 2.3728639 || Le Gall<ref name="LeGall2014">
{{Citation
| last1=Le Gall
| first1=François
| contribution=Powers of tensors and fast matrix multiplication
| title=Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation (ISSAC 2014)
| arxiv=1401.7714
| year=2014
| bibcode=2014arXiv1401.7714L
}}</ref>
|-
| 2020 || 2.3728596 || Alman, [[Virginia Vassilevska Williams|Williams]]<ref name="aw20"/>
|}
The '''matrix multiplication exponent''', usually denoted {{math|ω}}, is the smallest real number for which any two <math>n\times n</math> matrices over a field can be multiplied together using <math>n^{\omega + o(1)}</math> field operations. This notation is commonly used in [[algorithm]]s research, so that algorithms which use matrix multiplication as a subroutine have bounds on running time that hold regardless of the true value of {{math|ω}}.
Using a naive lower bound and schoolbook matrix multiplication for the upper bound, one can straightforwardly conclude that {{math|2 ≤ ω ≤ 3}}. Whether {{math|1=ω = 2}} is a major open question in [[theoretical computer science]], and there is a line of research developing matrix multiplication algorithms to get improved bounds on {{math|ω}}.
The current best bound on {{math|ω}} is {{math|ω < 2.3728596}}, by Josh Alman and [[Virginia Vassilevska Williams]].<ref name="aw20"/> This algorithm, like all other recent algorithms in this line of research, uses the ''laser method'', a generalization of the Coppersmith–Winograd algorithm, which was given by [[Don Coppersmith]] and [[Shmuel Winograd]] in 1990 and was the best matrix multiplication algorithm until 2010, with an asymptotic complexity of {{math|''O''(''n''<sup>2.375477</sup>)}}.<ref name="coppersmith">{{Citation|doi=10.1016/S0747-7171(08)80013-2 |title=Matrix multiplication via arithmetic progressions |url=http://www.cs.umd.edu/~gasarch/TOPICS/ramsey/matrixmult.pdf |year=1990 |last1=Coppersmith |first1=Don |last2=Winograd |first2=Shmuel |journal=Journal of Symbolic Computation |volume=9|issue=3|pages=251}}</ref> The conceptual idea of these algorithms is similar to Strassen's algorithm: a way is devised for multiplying two {{math|''k'' × ''k''}}-matrices with fewer than {{math|''k''<sup>3</sup>}} multiplications, and this technique is applied recursively. The laser method has inherent limitations, and cannot be used to show that {{math|ω < 2.3725}}.<ref>{{Cite journal|last1=Ambainis|first1=Andris|last2=Filmus|first2=Yuval|last3=Le Gall|first3=François|date=2015-06-14|title=Fast Matrix Multiplication: Limitations of the Coppersmith-Winograd Method|url=https://doi.org/10.1145/2746539.2746554|journal=Proceedings of the forty-seventh annual ACM symposium on Theory of Computing|series=STOC '15|___location=Portland, Oregon, USA|publisher=Association for Computing Machinery|pages=585–593|doi=10.1145/2746539.2746554|isbn=978-1-4503-3536-2}}</ref>
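In general (a standard observation, stated here to make the recursion explicit rather than taken from the cited papers), if two {{math|''k'' × ''k''}}-matrices can be multiplied by a bilinear algorithm, one that does not rely on commutativity of the entries, using {{math|''m''}} scalar multiplications, then applying it recursively to block matrices multiplies {{math|''n'' × ''n''}}-matrices in time
:<math>O\left(n^{\log_k m}\right), \qquad\text{so that}\qquad \omega \le \log_k m.</math>
Strassen's scheme is the case {{math|1=''k'' = 2}}, {{math|1=''m'' = 7}}, giving {{math|ω ≤ log<sub>2</sub> 7 ≈ 2.807}}.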
=== Group theory reformulation of matrix multiplication algorithms ===
[[Henry Cohn]], [[Robert Kleinberg]], [[Balázs Szegedy]] and [[Chris Umans]] put methods such as the Strassen and Coppersmith–Winograd algorithms in an entirely different [[group theory|group-theoretic]] context, by utilising triples of subsets of finite groups which satisfy a disjointness property called the [[Triple product property|triple product property (TPP)]]. They also give conjectures that, if true, would imply that there are matrix multiplication algorithms with essentially quadratic complexity. This would imply that the optimal exponent of matrix multiplication is 2, which most researchers believe is indeed the case.<ref name="robinson"/> One such conjecture is that families of [[wreath product]]s of [[Abelian group]]s with symmetric groups realise families of subset triples with a simultaneous version of the TPP.<ref>{{Cite book | last1 = Cohn | first1 = H. | last2 = Kleinberg | first2 = R. | last3 = Szegedy | first3 = B. | last4 = Umans | first4 = C. | chapter = Group-theoretic Algorithms for Matrix Multiplication | doi = 10.1109/SFCS.2005.39 | title = 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS'05) | pages = 379 | year = 2005 | isbn = 0-7695-2468-0 | s2cid = 41278294 | url = https://authors.library.caltech.edu/23966/ }}</ref><ref>Henry Cohn, Chris Umans. A Group-theoretic Approach to Fast Matrix Multiplication. {{arxiv|math.GR/0307321}}. ''Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science'', 11–14 October 2003, Cambridge, MA, IEEE Computer Society, pp. 438–449.</ref> Several of their conjectures have since been disproven by Blasiak, Cohn, Church, Grochow, Naslund, Sawin, and Umans using the slice rank method.<ref>{{Cite book | last1 = Blasiak | first1 = J. | last2 = Cohn | first2 = H. | last3 = Church | first3 = T. | last4 = Grochow | first4 = J. | last5 = Naslund | first5= E. | last6 = Sawin | first6 = W. | last7=Umans | first7= C.| chapter= On cap sets and the group-theoretic approach to matrix multiplication | doi = 10.19086/da.1245 | title = Discrete Analysis | year = 2017 | s2cid = 9687868 | url = http://discreteanalysisjournal.com/article/1245-on-cap-sets-and-the-group-theoretic-approach-to-matrix-multiplication}}</ref> Further, Alon, Shpilka and [[Chris Umans]] have recently shown that some of these conjectures implying fast matrix multiplication are incompatible with another plausible conjecture, the [[sunflower conjecture]].<ref>[[Noga Alon|Alon]], Shpilka, Umans, [http://eccc.hpi-web.de/report/2011/067/ On Sunflowers and Matrix Multiplication]</ref>
=== Lower bounds for ω ===
There is a trivial lower bound of {{tmath|\omega \ge 2}}: since any algorithm for multiplying two {{math|''n'' × ''n''}}-matrices has to process all {{math|2''n''<sup>2</sup>}} entries, every matrix multiplication algorithm requires {{math|Ω(''n''<sup>2</sup>)}} operations. Thus {{tmath|2\le \omega < 2.373}}; it is unknown whether {{tmath|\omega > 2}}. The best known lower bound for matrix-multiplication complexity is {{math|Ω(''n''<sup>2</sup> log(''n''))}}, for bounded-coefficient [[Arithmetic circuit complexity|arithmetic circuits]] over the real or complex numbers, and is due to [[Ran Raz]].<ref>{{cite journal | last1 = Raz | first1 = Ran | author-link = Ran Raz | year = 2002| title = On the complexity of matrix product | journal = Proceedings of the Thirty-fourth Annual ACM Symposium on Theory of Computing | pages = 144 | doi = 10.1145/509907.509932 | isbn = 1581134959 | s2cid = 9582328 }}</ref>
=== Rectangular matrix multiplication ===
The exponent of matrix multiplication has also been studied for the product of rectangular matrices. Le Gall and Urrutia gave improved rectangular matrix multiplication algorithms using powers of the Coppersmith–Winograd tensor.<ref>{{Citation|last1=Le Gall|first1=François|last2=Urrutia|first2=Florent|title=Improved Rectangular Matrix Multiplication using Powers of the Coppersmith-Winograd Tensor|date=2018-01-01|url=https://epubs.siam.org/doi/10.1137/1.9781611975031.67|work=Proceedings of the 2018 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA)|pages=1029–1046|series=Proceedings|publisher=Society for Industrial and Applied Mathematics|doi=10.1137/1.9781611975031.67|access-date=2021-05-23|arxiv=1708.05622}}</ref>
==Related complexities==
{{further|Computational complexity of mathematical operations#Matrix algebra}}
Problems that have the same asymptotic complexity as matrix multiplication include computing the [[determinant]], [[matrix inversion]], and [[Gaussian elimination]] (see next section). Problems whose complexity is expressible in terms of <math>\omega</math> include computing the [[characteristic polynomial]], eigenvalues (but not eigenvectors), the [[Hermite normal form]], and the [[Smith normal form]].{{citation needed|date=March 2018}}
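As one concrete illustration of how such a problem inherits the exponent of matrix multiplication, the following Python sketch inverts a matrix by {{math|2 × 2}} block elimination (the function name is arbitrary, the size is assumed to be a power of two, and every leading block and [[Schur complement]] encountered is assumed invertible, which is not guaranteed in general, so this is a sketch rather than robust code):
<syntaxhighlight lang="python">
import numpy as np

def inverse_via_multiplication(M):
    """Invert a matrix of power-of-two size by 2x2 block elimination, so the
    dominant cost is a constant number of half-size matrix multiplications
    plus two half-size inversions; no pivoting, so not numerically robust."""
    n = M.shape[0]
    if n == 1:
        return np.array([[1.0 / M[0, 0]]])
    h = n // 2
    A, B = M[:h, :h], M[:h, h:]
    C, D = M[h:, :h], M[h:, h:]
    Ainv = inverse_via_multiplication(A)
    S = D - C @ Ainv @ B                       # Schur complement of A
    Sinv = inverse_via_multiplication(S)
    out = np.empty_like(M, dtype=float)
    out[:h, :h] = Ainv + Ainv @ B @ Sinv @ C @ Ainv
    out[:h, h:] = -Ainv @ B @ Sinv
    out[h:, :h] = -Sinv @ C @ Ainv
    out[h:, h:] = Sinv
    return out
</syntaxhighlight>
Each call performs two half-size inversions and a constant number of half-size multiplications, so the cost satisfies {{math|1=''T''(''n'') = 2''T''(''n''/2) + ''O''(''n''<sup>ω</sup>)}}, which resolves to {{math|''O''(''n''<sup>ω</sup>)}}.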
== See also ==
* [[Computational complexity of mathematical operations]]
* [[CYK algorithm#Valiant's algorithm|CYK algorithm, §Valiant's algorithm]]
* [[Freivalds' algorithm]], a simple [[Monte Carlo algorithm]] that, given matrices {{mvar|A}}, {{mvar|B}} and {{mvar|C}}, verifies in {{math|Θ(''n''<sup>2</sup>)}} time whether {{math|''AB'' {{=}} ''C''}} (a short sketch follows this list).
* [[Matrix chain multiplication]]
* [[Matrix multiplication]]
* [[Matrix multiplication algorithm]]
* [[Sparse matrix-vector multiplication]]
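For illustration, a minimal Python sketch of Freivalds' check (the function name and the number of rounds are choices made here for concreteness; the exact equality test is valid for integer matrices):
<syntaxhighlight lang="python">
import numpy as np

def freivalds_check(A, B, C, rounds=20, rng=None):
    """Probabilistically test whether A @ B == C using only matrix-vector
    products (Theta(n^2) work per round); never wrongly rejects, and wrongly
    accepts with probability at most 2**-rounds."""
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    for _ in range(rounds):
        r = rng.integers(0, 2, size=n)              # random 0/1 vector
        if not np.array_equal(A @ (B @ r), C @ r):  # three matrix-vector products
            return False                            # certainly A @ B != C
    return True                                     # probably A @ B == C
</syntaxhighlight>
Each round detects an incorrect product with probability at least 1/2, so {{math|''k''}} rounds wrongly accept with probability at most {{math|2<sup>−''k''</sup>}}.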