Computational complexity of matrix multiplication: Difference between revisions

Using a naive lower bound and schoolbook matrix multiplication for the upper bound, one can straightforwardly conclude that {{math|2 ≤ ω ≤ 3}}. Whether {{math|1=ω = 2}} is a major open question in [[theoretical computer science]], and there is a line of research developing matrix multiplication algorithms to get improved bounds on {{math|ω}}.
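For concreteness, the schoolbook upper bound can be illustrated with a short sketch in Python (illustrative only; the function name is arbitrary). The triple loop performs exactly ''n''<sup>3</sup> scalar multiplications, while any algorithm must at least read the 2''n''<sup>2</sup> input entries, which is the source of the trivial bounds 2 ≤ ω ≤ 3.

```python
def schoolbook_matmul(A, B):
    """Multiply two n x n matrices with the schoolbook algorithm.

    Uses exactly n**3 scalar multiplications, giving the upper bound
    omega <= 3; any algorithm must read all 2*n**2 input entries,
    giving the trivial lower bound omega >= 2.
    """
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

print(schoolbook_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]]
```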
 
All recent algorithms in this line of research use the ''laser method'', a generalization of the Coppersmith–Winograd algorithm, which was given by [[Don Coppersmith]] and [[Shmuel Winograd]] in 1990 and was the best matrix multiplication algorithm until 2010.<ref name="coppersmith">{{cite journal|doi=10.1016/S0747-7171(08)80013-2 |title=Matrix multiplication via arithmetic progressions |url=http://www.cs.umd.edu/~gasarch/TOPICS/ramsey/matrixmult.pdf |year=1990 |last1=Coppersmith |first1=Don |last2=Winograd |first2=Shmuel |journal=Journal of Symbolic Computation |volume=9|issue=3|pages=251|doi-access=free }}</ref> The conceptual idea of these algorithms is similar to Strassen's algorithm: a way is devised for multiplying two {{math|''k'' × ''k''}}-matrices with fewer than {{math|''k''<sup>3</sup>}} multiplications, and this technique is applied recursively. The laser method has limitations to its power, and cannot be used to show that {{math|ω < 2.3725}}.<ref name="afl14">{{Cite book|last1=Ambainis|first1=Andris|last2=Filmus|first2=Yuval|last3=Le Gall|first3=François|title=Proceedings of the forty-seventh annual ACM symposium on Theory of Computing|chapter=Fast Matrix Multiplication: Limitations of the Coppersmith–Winograd Method|date=2015-06-14|chapter-url=https://doi.org/10.1145/2746539.2746554|series=STOC '15|___location=Portland, Oregon, USA|publisher=Association for Computing Machinery|pages=585–593|doi=10.1145/2746539.2746554|arxiv=1411.5414 |isbn=978-1-4503-3536-2|s2cid=8332797 }}</ref> Duan, Wu and Zhou identify a source of potential optimization in the laser method termed ''combination loss''.<ref name="dwz22"/> They find a way to exploit this to devise a variant of the laser method which they use to show {{math|ω < 2.37188}}, breaking the barrier for any conventional laser method algorithm.
However, according to Duan, Wu and Zhou, another bound<ref name="afl14"/> applies to this newer approach: addressing combination loss in the laser method alone cannot show {{math|ω < 2.3078}}.
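The Strassen-style recursion described above can be sketched in Python (illustrative only). Multiplying 2 × 2 blocks with seven products instead of eight and recursing gives a running time of O(''n''<sup>log<sub>2</sub> 7</sup>) ≈ O(''n''<sup>2.807</sup>); more generally, a scheme using ''m'' < ''k''<sup>3</sup> multiplications for ''k'' × ''k'' blocks shows ω ≤ log<sub>''k''</sub> ''m''.

```python
import math

def strassen_2x2(A, B):
    """Strassen's seven products for a 2 x 2 multiplication.

    Applied recursively to matrix blocks, seven multiplications
    instead of eight yield an O(n**log2(7)) = O(n**2.807...) algorithm.
    """
    a, b, c, d = A[0][0], A[0][1], A[1][0], A[1][1]
    e, f, g, h = B[0][0], B[0][1], B[1][0], B[1][1]
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4, p1 + p5 - p3 - p7]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
# A scheme using m multiplications for k x k blocks gives omega <= log_k(m):
print(math.log2(7))  # about 2.807
```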
 
=== Group theory reformulation of matrix multiplication algorithms ===
=== Lower bounds for ω ===
 
There is a trivial lower bound of {{tmath|\omega \ge 2}}. Since any algorithm for multiplying two {{math|''n'' × ''n''}}-matrices has to process all {{math|2''n''<sup>2</sup>}} entries, there is a trivial asymptotic lower bound of {{math|Ω(''n''<sup>2</sup>)}} operations for any matrix multiplication algorithm. Thus {{tmath|2\le \omega < 2.37188}}. It is unknown whether {{tmath|\omega > 2}}. The best known lower bound for matrix-multiplication complexity is {{math|Ω(''n''<sup>2</sup> log(''n''))}}, for bounded coefficient [[Arithmetic circuit complexity|arithmetic circuits]] over the real or complex numbers, and is due to [[Ran Raz]].<ref>{{cite book | last1 = Raz | first1 = Ran | author-link = Ran Raz | chapter = On the complexity of matrix product | title = Proceedings of the thiry-fourth annual ACM symposium on Theory of computing | date = 2002 | pages = 144–151 | doi = 10.1145/509907.509932 | isbn = 1581134959 | s2cid = 9582328 }}</ref>
 
The exponent ω is defined to be a [[Accumulation point|limit point]], in that it is the infimum of the exponent over all matrix multiplication algorithms. It is known that this limit point is not achieved. In other words, under the model of computation typically studied, there is no matrix multiplication algorithm that uses precisely {{math|O(''n''<sup>ω</sup>)}} operations; there must be an additional factor of {{math|''n''<sup>o(1)</sup>}}.<ref name="CW.81"/>