User:Fawly/Computational complexity of matrix multiplication: Difference between revisions

The matrix multiplication [[algorithm]] that results from the definition requires, in the [[worst-case complexity|worst case]], {{tmath|n^3}} multiplications of scalars and {{tmath|(n-1)n^2}} additions to compute the product of two square {{math|''n''×''n''}} matrices. Its [[computational complexity]] is therefore {{tmath|O(n^3)}}, in a [[model of computation]] for which the scalar operations take constant time (in practice, this is the case for [[floating point]] numbers, but not for integers).
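For illustration, a direct Python transcription of the definition (a sketch, not drawn from the cited sources) makes these operation counts explicit:

```python
# Naive (schoolbook) matrix multiplication: for n x n inputs it performs
# exactly n^3 scalar multiplications and (n - 1) * n^2 scalar additions.
def matmul(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    mults = adds = 0
    for i in range(n):
        for j in range(n):
            # Each entry C[i][j] needs n multiplications and n - 1 additions.
            s = A[i][0] * B[0][j]
            mults += 1
            for k in range(1, n):
                s += A[i][k] * B[k][j]
                mults += 1
                adds += 1
            C[i][j] = s
    return C, mults, adds
```

For {{tmath|n {{=}} 3}} this yields 27 multiplications and 18 additions, matching {{tmath|n^3}} and {{tmath|(n-1)n^2}}.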
 
{| class="wikitable"
|+ Timeline of matrix multiplication exponent
! Year !! Bound on {{tmath|\omega}} !! Authors
|-
| 1969 || 2.8074 || [[Volker Strassen|Strassen]]<ref>
{{cite journal
| doi=10.1007/BF02165411
| s2cid = 121656251
}}</ref>
|-
| 1978 || 2.796 || Pan<ref>
{{cite book
| doi=10.1109/SFCS.1978.34
| date=Oct 1978
| s2cid= 14348408
}}</ref>
|-
| 1979 || 2.78 || Bini, Capovani, Romani<ref>
{{cite journal
| doi=10.1016/0020-0190(79)90113-3
| pages=234&ndash;235
| date=Jun 1979
}}</ref>
|-
| 1981 || 2.522 || Schönhage<ref>
{{cite journal
| doi=10.1137/0210032
| pages=434&ndash;455
| year=1981
}}</ref>
|-
| 1982 || 2.517 || Romani<ref>
{{cite journal
| doi=10.1137/0211020
| pages=263&ndash;267
| year=1982
}}</ref>
|-
| 1981 || 2.496 || Coppersmith, Winograd<ref>
{{cite book
| doi=10.1109/SFCS.1981.27
| year=1981
| s2cid=206558664
}}</ref>
|-
| 1986 || 2.479 || Strassen<ref>
{{cite book
| doi=10.1109/SFCS.1986.52
| s2cid=15077423
}}</ref>
|-
| 1990 || 2.3755 || Coppersmith, Winograd<ref>
{{cite journal
| url=https://www.sciencedirect.com/science/article/pii/S0747717108800132
| date=Mar 1990
| doi=10.1016/S0747-7171(08)80013-2
}}</ref><ref name="Williams.2014">
{{cite report
| last=Williams
| first=Virginia Vassilevska
| author-link=Virginia Vassilevska Williams
| title=Multiplying matrices in <math>O(n^{2.373})</math> time
| institution=Stanford University
| type=Technical Report
| url=http://www.cs.stanford.edu/~virgi/matrixmult-f.pdf
}}</ref>
|-
| 2010 || 2.3737 || Stothers<ref>
{{cite thesis
| type=Ph.D. thesis
| year=2010
}}</ref>
|-
| 2013 || 2.3729 || [[Virginia Vassilevska Williams|Williams]]<ref>
{{cite book
| doi=10.1145/2213977.2214056
| pages=887&ndash;898
| year=2012
}}</ref><ref name="Williams.2014"/>
|-
| 2014 || 2.3728639 || Le Gall<ref name="LeGall2014">
{{Citation
| last1=Le Gall
| bibcode=2014arXiv1401.7714L
}}</ref>
|-
| 2020 || 2.3728596 || Alman, Williams<ref name="Alman2020">
{{Citation
| last1=Alman
|url=https://www.siam.org/conferences/cm/program/accepted-papers/soda21-accepted-papers
}}</ref>
|}
 
Rather surprisingly, this complexity is not optimal, as shown in 1969 by [[Volker Strassen]], who provided an algorithm, now called [[Strassen's algorithm]], with a complexity of <math>O( n^{\log_{2}7}) \approx O(n^{2.8074}).</math> The exponent appearing in the complexity of matrix multiplication has since been improved several times, leading to the [[Coppersmith–Winograd algorithm]] with a complexity of {{math|''O''(''n''<sup>2.3755</sup>)}} (1990). This was slightly improved in 2010 by Stothers to {{math|''O''(''n''<sup>2.3737</sup>)}}, in 2013 by [[Virginia Vassilevska Williams]] to {{math|''O''(''n''<sup>2.3729</sup>)}},<ref name="Williams.2014"/> in 2014 by François Le Gall to {{math|''O''(''n''<sup>2.3728639</sup>)}},<ref name="LeGall2014"/> and in 2020 by Josh Alman and Williams to {{math|''O''(''n''<sup>2.3728596</sup>)}}, the best bound to date.<ref name="Alman2020"/>
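The exponent {{tmath|\log_{2}7}} arises because Strassen's scheme multiplies two 2×2 (block) matrices with 7 multiplications instead of 8, and is applied recursively to the blocks. A minimal Python sketch of the 2×2 base case, using the standard Strassen products (illustrative only):

```python
# Strassen's 2x2 scheme: 7 multiplications instead of 8.
# Applied recursively to n/2 x n/2 blocks it gives O(n^(log2 7)).
def strassen_2x2(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    # Recombine the seven products into the four entries of A*B.
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]
```

The recombination uses only additions and subtractions, which is why only the multiplication count drives the exponent.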
 
The [[greatest lower bound]] for the exponent of matrix multiplication is generally called {{tmath|\omega}}. It is trivially true that {{tmath|\omega \ge 2}}, because one must read the {{tmath|n^2}} entries of a matrix in order to multiply it by another matrix. Thus {{tmath|2\le \omega < 2.373}}. It is unknown whether {{tmath|\omega > 2}}. The largest known lower bound for matrix-multiplication complexity is {{math|Ω(''n''<sup>2</sup> log(''n''))}}, which holds for a restricted kind of [[Arithmetic circuit complexity|arithmetic circuit]] and is due to [[Ran Raz]].<ref>