Directly applying the mathematical definition of matrix multiplication gives an algorithm that [[Analysis of algorithms|takes time]] on the order of {{math|''n''<sup>3</sup>}} [[Field (mathematics)|field]] operations to multiply two {{math|''n'' × ''n''}} matrices over that field ({{math|Θ(''n''<sup>3</sup>)}} in [[big O notation]]). Better asymptotic bounds on the time required to multiply matrices have been known since [[Strassen algorithm|Strassen's algorithm]] in the 1960s, but the optimal time (that is, the [[computational complexity of matrix multiplication]]) remains unknown. {{As of|2020|12}}, the matrix multiplication algorithm with the best asymptotic complexity runs in {{math|O(''n''<sup>2.3728596</sup>)}} time, given by Josh Alman and [[Virginia Vassilevska Williams]].<ref name="aw20">{{cite conference | last1=Alman | first1=Josh | last2=Williams | first2=Virginia Vassilevska |title=A Refined Laser Method and Faster Matrix Multiplication | year = 2020 | arxiv=2010.05846 |book-title = 32nd Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2021) |url=https://www.siam.org/conferences/cm/program/accepted-papers/soda21-accepted-papers |doi=10.1137/1.9781611976465.32 }}</ref><ref>{{Cite web|last=Hartnett|first=Kevin|title=Matrix Multiplication Inches Closer to Mythic Goal|url=https://www.quantamagazine.org/mathematicians-inch-closer-to-matrix-multiplication-goal-20210323/|access-date=2021-04-01|website=Quanta Magazine|date=23 March 2021|language=en}}</ref> However, this algorithm is a [[galactic algorithm]] because of its large constant factors and cannot be realized practically.
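As an illustration, the definition-based algorithm can be sketched in Python as follows (a minimal sketch; the function name <code>matmul_naive</code> is illustrative, and the triple loop over row, column, and inner index is what gives the {{math|Θ(''n''<sup>3</sup>)}} operation count):

```python
def matmul_naive(A, B):
    """Multiply two n-by-n matrices directly from the definition.

    Each of the n*n output entries is an inner product of a row of A
    and a column of B, costing n multiplications: Theta(n^3) in total.
    """
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]
```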
==Iterative algorithm==
[[Freivalds' algorithm]] is a simple [[Monte Carlo algorithm]] that, given matrices {{mvar|A}}, {{mvar|B}} and {{mvar|C}}, verifies in {{math|Θ(''n''<sup>2</sup>)}} time if {{math|''AB'' {{=}} ''C''}}.
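The check can be sketched in Python as follows (a minimal sketch; the function name <code>freivalds</code> and the default of ten rounds are illustrative). Instead of recomputing {{math|''AB''}}, each round multiplies by a random 0–1 vector {{mvar|r}} and compares {{math|''A''(''Br'')}} with {{math|''Cr''}}, which needs only {{math|Θ(''n''<sup>2</sup>)}} work; a mismatch proves {{math|''AB'' ≠ ''C''}}, while agreement in every round leaves an error probability of at most {{math|2<sup>−''rounds''</sup>}}:

```python
import random

def freivalds(A, B, C, rounds=10):
    """Probabilistically verify that A @ B == C in O(n^2) time per round."""
    n = len(A)
    for _ in range(rounds):
        # Random 0/1 vector r.
        r = [random.randint(0, 1) for _ in range(n)]
        # Compute A(Br) and Cr: three matrix-vector products, O(n^2) each.
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # Certificate that AB != C.
    return True  # AB == C with probability at least 1 - 2**-rounds.
```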
=== AlphaTensor ===
In 2022, [[DeepMind]] introduced AlphaTensor, a [[neural network]] that used a single-player game analogy to invent thousands of matrix multiplication algorithms, including some previously discovered by humans.<ref>{{Cite web |title=Discovering novel algorithms with AlphaTensor |url=https://www.deepmind.com/blog/discovering-novel-algorithms-with-alphatensor |access-date=2022-11-01 |website=www.deepmind.com |language=en}}</ref> The best practical algorithm it found runs in {{math|O(''n''<sup>2.778</sup>)}} time for matrices over the [[GF(2)|finite field <math>\mathbb Z/2\mathbb Z</math>]].<ref name="alphatensor">{{Cite journal |last1=Fawzi |first1=Alhussein |last2=Balog |first2=Matej |last3=Huang |first3=Aja |last4=Hubert |first4=Thomas |last5=Romera-Paredes |first5=Bernardino |last6=Barekatain |first6=Mohammadamin |last7=Novikov |first7=Alexander |last8=R. Ruiz |first8=Francisco J. |last9=Schrittwieser |first9=Julian |last10=Swirszcz |first10=Grzegorz |last11=Silver |first11=David |last12=Hassabis |first12=Demis |last13=Kohli |first13=Pushmeet |date=October 2022 |title=Discovering faster matrix multiplication algorithms with reinforcement learning |journal=Nature |volume=610 |issue=7930 |pages=47–53 |doi=10.1038/s41586-022-05172-4 |pmid=36198780 |pmc=9534758 |issn=1476-4687}}</ref>
==Parallel and distributed algorithms==