Computational complexity of matrix multiplication: Difference between revisions

Using a naive lower bound and schoolbook matrix multiplication for the upper bound, one can straightforwardly conclude that {{math|2 ≤ ω ≤ 3}}. Whether {{math|1=ω = 2}} is a major open question in [[theoretical computer science]], and there is a line of research developing matrix multiplication algorithms to get improved bounds on {{math|ω}}.
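The schoolbook upper bound can be made concrete with a short illustrative sketch (not drawn from the article's cited sources): computing each of the {{math|''n''<sup>2</sup>}} entries of the product of two {{math|''n'' × ''n''}} matrices as a sum of {{math|''n''}} products uses {{math|''n''<sup>3</sup>}} scalar multiplications, hence {{math|ω ≤ 3}}.

```python
def schoolbook_multiply(A, B):
    """Schoolbook multiplication of two n x n matrices (nested lists).

    Uses exactly n^3 scalar multiplications, giving the trivial
    upper bound omega <= 3.
    """
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C
```

The naive lower bound {{math|ω ≥ 2}} holds simply because any algorithm must read all {{math|''n''<sup>2</sup>}} entries of each input matrix.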
 
All recent algorithms in this line of research use the ''laser method'', a generalization of the Coppersmith–Winograd algorithm, which was given by [[Don Coppersmith]] and [[Shmuel Winograd]] in 1990 and was the best matrix multiplication algorithm until 2010.<ref name="coppersmith">{{cite journal |doi=10.1016/S0747-7171(08)80013-2 |title=Matrix multiplication via arithmetic progressions |url=http://www.cs.umd.edu/~gasarch/TOPICS/ramsey/matrixmult.pdf |year=1990 |last1=Coppersmith |first1=Don |last2=Winograd |first2=Shmuel |journal=Journal of Symbolic Computation |volume=9 |issue=3 |pages=251 |doi-access=free}}</ref> The conceptual idea of these algorithms is similar to Strassen's algorithm: a way is devised for multiplying two {{math|''k'' × ''k''}}-matrices with fewer than {{math|''k''<sup>3</sup>}} multiplications, and this technique is applied recursively. However, the laser method has limits to its power: [[Andris Ambainis|Ambainis]], Filmus and [[François Le Gall|Le Gall]] proved that it cannot be used to show {{math|ω < 2.3725}} by analyzing higher and higher tensor powers of a certain identity of Coppersmith and Winograd, nor {{math|ω < 2.3078}} for a wide class of variants of this approach.<ref name="afl14">{{Cite book |last1=Ambainis |first1=Andris |last2=Filmus |first2=Yuval |last3=Le Gall |first3=François |chapter=Fast Matrix Multiplication |title=Proceedings of the forty-seventh annual ACM symposium on Theory of Computing |date=2015-06-14 |publisher=Association for Computing Machinery |series=STOC '15 |___location=Portland, Oregon, USA |pages=585–593 |doi=10.1145/2746539.2746554 |chapter-url=https://doi.org/10.1145/2746539.2746554 |arxiv=1411.5414 |isbn=978-1-4503-3536-2 |s2cid=8332797}}</ref> In 2022, Duan, Wu and Zhou devised a variant breaking the first of these two barriers with {{math|ω < 2.37188}};<ref name="dwz22" /> they did so by identifying a source of potential optimization in the laser method, termed ''combination loss'', for which they compensate using an asymmetric version of the hashing method in the Coppersmith–Winograd algorithm.
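The recursive idea described above can be illustrated with Strassen's original algorithm (an illustrative sketch of the recursion principle only, not the laser method): two {{math|2 × 2}} block matrices are multiplied with 7 block multiplications instead of 8, which applied recursively yields a running time of {{math|O(''n''<sup>log<sub>2</sub> 7</sup>) ≈ O(''n''<sup>2.807</sup>)}}.

```python
def strassen(A, B):
    """Multiply two n x n matrices (n a power of two) given as nested lists,
    using Strassen's 7-multiplication recursion."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2

    # Helpers: extract an h x h block, and add/subtract blocks entrywise.
    def block(M, r, c):
        return [row[c:c + h] for row in M[r:r + h]]
    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    A11, A12, A21, A22 = block(A, 0, 0), block(A, 0, h), block(A, h, 0), block(A, h, h)
    B11, B12, B21, B22 = block(B, 0, 0), block(B, 0, h), block(B, h, 0), block(B, h, h)

    # Strassen's 7 recursive products (a schoolbook block split would need 8).
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))

    # Recombine the products into the four blocks of the result.
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(add(sub(M1, M2), M3), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bottom = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bottom
```

Laser-method algorithms follow the same template but obtain far better base identities than 7 multiplications for {{math|2 × 2}} blocks, which is what drives the exponent below 2.38.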
 
=== Group theory reformulation of matrix multiplication algorithms ===