Strassen's algorithm improves on naive matrix multiplication through a [[Divide-and-conquer algorithm|divide-and-conquer]] approach. The key observation is that multiplying two {{math|2 × 2}} matrices can be done with only 7 multiplications, instead of the usual 8 (at the expense of several additional addition and subtraction operations). This means that, treating the input {{math|''n'' × ''n''}} matrices as block {{math|2 × 2}} matrices, the task of multiplying {{math|''n'' × ''n''}} matrices can be reduced to 7 subproblems of multiplying {{math|''n''/2 × ''n''/2}} matrices. Applying this recursively gives an algorithm needing <math>O(n^{\log_2 7}) \approx O(n^{2.807})</math> field operations.
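The following is a minimal sketch of this recursion in Python, not taken from the sources cited here: it assumes [[NumPy]] is available and that {{math|''n''}} is a power of two, and the <code>cutoff</code> parameter is an illustrative tuning knob; practical implementations pad the inputs and switch to the schoolbook method below a machine-dependent threshold.

<syntaxhighlight lang="python">
# Sketch of Strassen's divide-and-conquer scheme. Assumes n is a power of two;
# "cutoff" is a hypothetical tuning parameter, not a value from the literature.
import numpy as np

def strassen(A, B, cutoff=64):
    """Multiply square matrices A and B (order a power of two) recursively."""
    n = A.shape[0]
    if n <= cutoff:              # fall back to the schoolbook/BLAS product
        return A @ B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    # Strassen's seven recursive products (instead of the usual eight)
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    # Reassemble the four blocks of the product from the seven products
    C11 = M1 + M4 - M5 + M7
    C12 = M3 + M5
    C21 = M2 + M4
    C22 = M1 - M2 + M3 + M6
    return np.block([[C11, C12], [C21, C22]])
</syntaxhighlight>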
Unlike algorithms with faster asymptotic complexity, Strassen's algorithm is used in practice. The [[numerical stability]] is reduced compared to the naive algorithm,<ref>{{Citation | last1=Miller | first1=Webb | title=Computational complexity and numerical stability | citeseerx = 10.1.1.148.9947 | year=1975 | journal=SIAM Journal on Computing | volume=4 | issue=2 | pages=97–107 | doi=10.1137/0204009}}</ref> but it is faster in cases where {{math|''n'' > 100}} or so<ref name="skiena">{{cite book |first=Steven |last=Skiena |author-link=Steven Skiena |title=The Algorithm Design Manual |url=https://archive.org/details/algorithmdesignm00skie_772 |url-access=limited |publisher=Springer |year=2008 |pages=[https://archive.org/details/algorithmdesignm00skie_772/page/n56 45]–46, 401–403 |doi=10.1007/978-1-84800-070-4_4 |chapter=Sorting and Searching |isbn=978-1-84800-069-8 }}</ref> and appears in several libraries, such as [[Basic Linear Algebra Subprograms|BLAS]].<ref>{{cite book |last1=Press |first1=William H. |last2=Flannery |first2=Brian P. |last3=Teukolsky |first3=Saul A. |author3-link=Saul Teukolsky |last4=Vetterling |first4=William T. |title=Numerical Recipes: The Art of Scientific Computing |publisher=[[Cambridge University Press]] |edition=3rd |isbn=978-0-521-88068-8 |year=2007 |page=[https://archive.org/details/numericalrecipes00pres_033/page/n131 108] |title-link=Numerical Recipes }}</ref> Fast matrix multiplication algorithms cannot achieve ''component-wise stability'', but some can be shown to exhibit ''norm-wise stability''.<ref name="bdl16">{{Citation | last1=Ballard | first1=Grey | last2=Benson | first2=Austin R. | last3=Druinsky | first3=Alex | last4=Lipshitz | first4=Benjamin | last5=Schwartz | first5=Oded | title=Improving the numerical stability of fast matrix multiplication | year=2016 | journal=SIAM Journal on Matrix Analysis and Applications | volume=37 | issue=4 }}</ref>
== Matrix multiplication exponent ==
Using a naive lower bound and schoolbook matrix multiplication for the upper bound, one can straightforwardly conclude that {{math|2 ≤ ω ≤ 3}}: any algorithm must at least produce the {{math|''n''<sup>2</sup>}} entries of the output, while the schoolbook method uses {{math|''n''<sup>3</sup>}} multiplications. Whether {{math|1=ω = 2}} is a major open question in [[theoretical computer science]], and there is a line of research developing matrix multiplication algorithms to obtain improved bounds on {{math|ω}}.
The previous best bound on {{math|ω}} was {{math|ω < 2.3728596}}, due to Josh Alman and [[Virginia Vassilevska Williams]].<ref name="aw20"/> This algorithm, like all other recent algorithms in this line of research, uses the ''laser method'', a generalization of the Coppersmith–Winograd algorithm, which was given by [[Don Coppersmith]] and [[Shmuel Winograd]] in 1990 and was the best matrix multiplication algorithm until 2010.<ref name="coppersmith">{{Citation|doi=10.1016/S0747-7171(08)80013-2 |title=Matrix multiplication via arithmetic progressions |url=http://www.cs.umd.edu/~gasarch/TOPICS/ramsey/matrixmult.pdf |year=1990 |last1=Coppersmith |first1=Don |last2=Winograd |first2=Shmuel |journal=Journal of Symbolic Computation |volume=9 |issue=3 |pages=251 |doi-access=free }}</ref> The conceptual idea of these algorithms is similar to Strassen's algorithm: a way is devised for multiplying two {{math|''k'' × ''k''}}-matrices with fewer than {{math|''k''<sup>3</sup>}} multiplications, and this technique is applied recursively. The laser method has limitations to its power: it cannot be used to show that {{math|ω < 2.3725}}.<ref name="afl14">{{Cite journal|last1=Ambainis|first1=Andris|last2=Filmus|first2=Yuval|last3=Le Gall|first3=François|date=2015-06-14|title=Fast Matrix Multiplication: Limitations of the Coppersmith-Winograd Method|url=https://doi.org/10.1145/2746539.2746554|journal=Proceedings of the Forty-Seventh Annual ACM Symposium on Theory of Computing|series=STOC '15|___location=Portland, Oregon, USA|publisher=Association for Computing Machinery|pages=585–593|doi=10.1145/2746539.2746554|arxiv=1411.5414 |isbn=978-1-4503-3536-2|s2cid=8332797 }}</ref> Duan, Wu and Zhou identify a source of potential optimization in the laser method termed ''combination loss''.<ref name="dwz22"/> They find a way to exploit this to devise a variant of the laser method which they use to show {{math|ω < 2.37188}}, breaking the barrier for any conventional laser method algorithm. With this newer approach another bound<ref name="afl14"/> applies: according to Duan, Wu and Zhou, showing {{math|ω < 2.3078}} will not be possible using this approach alone.
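To illustrate how such a recursive scheme translates into an exponent bound (a sketch of the recursion argument only, not of the laser method itself): a way of multiplying {{math|''k'' × ''k''}} matrices with {{math|''m''}} multiplications, applied recursively, yields an {{math|''O''(''n''<sup>log<sub>''k''</sub> ''m''</sup>)}} algorithm, and hence {{math|ω ≤ log<sub>''k''</sub> ''m''}}.

<syntaxhighlight lang="python">
# Illustrative sketch: the exponent bound implied by a bilinear scheme
# that multiplies k x k matrices using m scalar multiplications.
from math import log

def exponent_bound(k: int, m: int) -> float:
    # Recursive application gives an O(n ** log_k(m)) algorithm,
    # so omega <= log_k(m).
    return log(m) / log(k)

print(exponent_bound(2, 8))  # schoolbook 2x2 scheme: 3.0
print(exponent_bound(2, 7))  # Strassen's scheme: ~2.807
# Bounds such as 2.37188 are not obtained from one small scheme of this form;
# the laser method instead analyzes powers of the Coppersmith-Winograd tensor.
</syntaxhighlight>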
=== Group theory reformulation of matrix multiplication algorithms ===