| url-status = dead
}}</ref><ref name="robinson">{{Citation | last=Robinson | first=Sara | title=Toward an Optimal Algorithm for Matrix Multiplication | url=https://archive.siam.org/pdf/news/174.pdf | date=November 2005 | journal=SIAM News | volume=38 | issue=9 | quote=Even if someone manages to prove one of the conjectures—thereby demonstrating that {{math|1=''ω'' = 2}}—the wreath product approach is unlikely to be applicable to the large matrix problems that arise in practice. [...] the input matrices must be astronomically large for the difference in time to be apparent.}}</ref>
== Matrix multiplication ==
{{Main|Matrix multiplication}}
=== Schoolbook algorithm ===
{{For|implementation techniques (in particular parallel and distributed algorithms)|Matrix multiplication algorithm}}
The matrix multiplication [[algorithm]] that results from the definition requires, in the [[worst-case complexity|worst case]], {{tmath|n^3}} multiplications of scalars and {{tmath|(n-1)n^2}} additions for computing the product of two square {{math|''n''×''n''}} matrices. Its [[computational complexity]] is therefore {{tmath|O(n^3)}}, in a [[model of computation]] for which the scalar operations take constant time (in practice, this is the case for [[floating point]] numbers, but not for integers).
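For concreteness, the definition translates directly into the following Python sketch (illustrative only; the function name <code>matmul</code> and the list-of-lists representation are conventions chosen here, not part of any standard library). The inner loop performs, for each of the {{tmath|n^2}} entries of the product, exactly the {{tmath|n}} scalar multiplications and {{tmath|n-1}} additions counted above.

<syntaxhighlight lang="python">
def matmul(A, B):
    """Schoolbook product of two n-by-n matrices given as lists of lists."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # Each entry is a dot product of a row of A and a column of B:
            # n scalar multiplications and n - 1 additions.
            s = A[i][0] * B[0][j]
            for k in range(1, n):
                s += A[i][k] * B[k][j]
            C[i][j] = s
    return C
</syntaxhighlight>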
=== Strassen's algorithm ===
{{Main|Strassen algorithm}}
Strassen's algorithm is based on a way of multiplying two {{math|2 × 2}} matrices which requires only 7 multiplications (instead of the usual 8), at the expense of several additional addition and subtraction operations. Applying this recursively gives an algorithm with a multiplicative cost of <math>O( n^{\log_{2}7}) \approx O(n^{2.807})</math>. Strassen's algorithm is more complex than the naïve algorithm and has reduced [[numerical stability]],<ref>{{Citation | last1=Miller | first1=Webb | title=Computational complexity and numerical stability | citeseerx = 10.1.1.148.9947 | year=1975 | journal=SIAM Journal on Computing | volume=4 | issue=2 | pages=97–107 | doi=10.1137/0204009}}</ref> but it is faster in cases where {{math|''n'' > 100}} or so<ref name="skiena"/> and appears in several libraries, such as [[Basic Linear Algebra Subprograms|BLAS]].<ref>{{cite book |last1=Press |first1=William H. |last2=Flannery |first2=Brian P. |last3=Teukolsky |first3=Saul A. |author3-link=Saul Teukolsky |last4=Vetterling |first4=William T. |title=Numerical Recipes: The Art of Scientific Computing |publisher=[[Cambridge University Press]] |edition=3rd |isbn=978-0-521-88068-8 |year=2007 |page=[https://archive.org/details/numericalrecipes00pres_033/page/n131 108]|title-link=Numerical Recipes }}</ref> It is very useful for large matrices over exact domains such as [[finite field]]s, where numerical stability is not an issue.
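A compact recursive sketch of Strassen's scheme in Python follows (again illustrative: it assumes {{math|''n''}} is a power of two and recurses down to {{math|1 × 1}} blocks, whereas practical implementations pad the inputs and switch to the schoolbook algorithm below a size cutoff). Counting operations gives the recurrence {{math|1=''T''(''n'') = 7''T''(''n''/2) + ''O''(''n''<sup>2</sup>)}}, whose solution is {{math|''O''(''n''<sup>log<sub>2</sub> 7</sup>)}}.

<syntaxhighlight lang="python">
def _add(X, Y):
    return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def _sub(X, Y):
    return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def strassen(A, B):
    """Multiply two n-by-n matrices (n a power of two) with Strassen's scheme."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]  # base case: one scalar product
    h = n // 2
    # Split both operands into four h-by-h quadrants.
    A11 = [row[:h] for row in A[:h]]; A12 = [row[h:] for row in A[:h]]
    A21 = [row[:h] for row in A[h:]]; A22 = [row[h:] for row in A[h:]]
    B11 = [row[:h] for row in B[:h]]; B12 = [row[h:] for row in B[:h]]
    B21 = [row[:h] for row in B[h:]]; B22 = [row[h:] for row in B[h:]]
    # Seven recursive multiplications instead of eight.
    M1 = strassen(_add(A11, A22), _add(B11, B22))
    M2 = strassen(_add(A21, A22), B11)
    M3 = strassen(A11, _sub(B12, B22))
    M4 = strassen(A22, _sub(B21, B11))
    M5 = strassen(_add(A11, A12), B22)
    M6 = strassen(_sub(A21, A11), _add(B11, B12))
    M7 = strassen(_sub(A12, A22), _add(B21, B22))
    # Recombine the quadrants of the product.
    C11 = _add(_sub(_add(M1, M4), M5), M7)
    C12 = _add(M3, M5)
    C21 = _add(M2, M4)
    C22 = _add(_sub(M1, M2), _add(M3, M6))
    return [r1 + r2 for r1, r2 in zip(C11, C12)] + \
           [r1 + r2 for r1, r2 in zip(C21, C22)]
</syntaxhighlight>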
== The matrix multiplication exponent ==
{| class="wikitable"
! Year !! Bound on ω !! Authors
|-
| 2020 || 2.3728596 || Alman, [[Virginia Vassilevska Williams|Williams]]<ref name="aw20"/>
|}
It is an open question in [[theoretical computer science]] how well Strassen's algorithm can be improved in terms of [[Time complexity|asymptotic complexity]]. The '''matrix multiplication exponent''', usually denoted {{math|ω}}, is the smallest real number for which any two <math>n\times n</math> matrices over a field can be multiplied together using <math>n^{\omega + o(1)}</math> field operations. This notation is commonly used in [[algorithm]]s research, so that algorithms using matrix multiplication as a subroutine have meaningful bounds on running time regardless of the true value of {{math|ω}}.
The current best bound on {{math|ω}} is {{math|ω < 2.3728596}}, by Josh Alman and [[Virginia Vassilevska Williams]].<ref name="aw20"/> This algorithm, like all other recent algorithms in this line of research, is a generalization of the Coppersmith–Winograd algorithm, which was given by [[Don Coppersmith]] and [[Shmuel Winograd]] in 1990; it remained the best matrix multiplication algorithm until 2010, and has an asymptotic complexity of {{math|''O''(''n''<sup>2.375477</sup>)}}.<ref name="coppersmith">{{Citation|doi=10.1016/S0747-7171(08)80013-2 |title=Matrix multiplication via arithmetic progressions |url=http://www.cs.umd.edu/~gasarch/TOPICS/ramsey/matrixmult.pdf |year=1990 |last1=Coppersmith |first1=Don |last2=Winograd |first2=Shmuel |journal=Journal of Symbolic Computation |volume=9|issue=3|pages=251–280}}</ref> The conceptual idea of these algorithms is similar to that of Strassen's algorithm: a way is devised for multiplying two {{math|''k'' × ''k''}} matrices with fewer than {{math|''k''<sup>3</sup>}} multiplications, and this technique is applied recursively.
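More precisely, if a bilinear algorithm multiplies two {{math|''k'' × ''k''}} matrices using {{math|''m''}} scalar multiplications, then applying it recursively shows that

<math display="block">\omega \le \log_k m.</math>

For Strassen's algorithm ({{math|1=''k'' = 2}}, {{math|1=''m'' = 7}}) this gives {{math|''ω'' ≤ log<sub>2</sub> 7 ≈ 2.807}}, while the schoolbook algorithm ({{math|1=''m'' = ''k''<sup>3</sup>}}) gives only the trivial bound {{math|''ω'' ≤ 3}}.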
==Related complexities==
{{further|Computational complexity of mathematical operations#Matrix algebra}}
Problems that have the same asymptotic complexity as matrix multiplication include computing the [[determinant]], [[matrix inversion]], and [[Gaussian elimination]] (see the next section). Problems whose complexity is expressible in terms of <math>\omega</math> include the characteristic polynomial, eigenvalues (but not eigenvectors), [[Hermite normal form]], and [[Smith normal form]].{{citation needed|date=March 2018}}
=== Rectangular matrix multiplication ===
Improved bounds for multiplying rectangular matrices have been obtained using powers of the Coppersmith–Winograd tensor.<ref>{{Citation |last1=Le Gall |first1=François |last2=Urrutia |first2=Florent |title=Improved Rectangular Matrix Multiplication using Powers of the Coppersmith-Winograd Tensor |date=2018-01-01 |url=https://epubs.siam.org/doi/10.1137/1.9781611975031.67 |work=Proceedings of the 2018 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA) |pages=1029–1046 |series=Proceedings |publisher=Society for Industrial and Applied Mathematics |doi=10.1137/1.9781611975031.67 |arxiv=1708.05622 |access-date=2021-05-23}}</ref>
===Matrix inversion, determinant and Gaussian elimination===