In [[theoretical computer science]], an active area of research is determining [[Analysis of algorithms|how efficiently]] the operation of [[matrix multiplication]] can be performed. [[Matrix multiplication algorithm]]s are a central subroutine in theoretical and [[numerical algorithm|numerical]] algorithms for [[numerical linear algebra]] and [[optimization]], so determining the fastest possible running time for matrix multiplication is of major practical relevance.
 
Directly applying the mathematical definition of matrix multiplication gives an algorithm that requires on the order of {{math|''n''<sup>3</sup>}} [[Field (mathematics)|field]] operations to multiply two {{math|''n'' × ''n''}} matrices over that field ({{math|Θ(''n''<sup>3</sup>)}} in [[big O notation]]). Surprisingly, algorithms exist that provide better running times than this straightforward "schoolbook algorithm". The first to be discovered was [[Strassen algorithm|Strassen's algorithm]], devised by [[Volker Strassen]] in 1969 and often referred to as "fast matrix multiplication". The optimal number of field operations needed to multiply two square {{math|''n'' × ''n''}} matrices [[big O notation|up to constant factors]] is still unknown. This is a major open question in [[theoretical computer science]].

{{As of|2020|12}}, the matrix multiplication algorithm with the best [[Time complexity|asymptotic complexity]] runs in {{math|O(''n''<sup>2.3728596</sup>)}} time, given by Josh Alman and [[Virginia Vassilevska Williams]],<ref name="aw20">
{{Citation
| last1=Alman
| first1=Josh
| last2=Williams
| first2=Virginia Vassilevska
| contribution = A Refined Laser Method and Faster Matrix Multiplication
| year = 2021
| title = 32nd Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2021)
|url=https://www.siam.org/conferences/cm/program/accepted-papers/soda21-accepted-papers
}}</ref><ref>{{Cite web|last=Hartnett|first=Kevin|title=Matrix Multiplication Inches Closer to Mythic Goal|url=https://www.quantamagazine.org/mathematicians-inch-closer-to-matrix-multiplication-goal-20210323/|access-date=2021-04-01|website=Quanta Magazine|language=en}}</ref> However, this and similar improvements to Strassen are not used in practice, because the constant coefficient hidden by the [[big O notation]] is so large that these [[galactic algorithm]]s are only worthwhile for matrices that are too large to handle on present-day computers.<ref>{{citation
| last = Iliopoulos
| first = Costas S.
| doi = 10.1137/0218045
| issue = 4
| journal = SIAM Journal on Computing
| mr = 1004789
| quote = The Coppersmith–Winograd algorithm is not practical, due to the very large hidden constant in the upper bound on the number of multiplications required.
| pages = 658–669
| title = Worst-case complexity bounds on algorithms for computing the canonical structure of finite abelian groups and the Hermite and Smith normal forms of an integer matrix
| url = http://www.williamstein.org/home/wstein/www/home/pernet/Papers/Hermite/Iliopoulos88.pdf
| volume = 18
| year = 1989
| citeseerx = 10.1.1.531.9309
| access-date = 2015-01-16
| archive-url = https://web.archive.org/web/20140305182030/http://www.williamstein.org/home/wstein/www/home/pernet/Papers/Hermite/Iliopoulos88.pdf
| archive-date = 2014-03-05
| url-status = dead
}}</ref><ref name="robinson">{{Citation | last=Robinson | first=Sara | title=Toward an Optimal Algorithm for Matrix Multiplication | url=https://archive.siam.org/pdf/news/174.pdf | date=November 2005 | journal=SIAM News | volume=38 | issue=9 | quote=Even if someone manages to prove one of the conjectures—thereby demonstrating that {{math|1=''ω'' = 2}}—the wreath product approach is unlikely to be applicable to the large matrix problems that arise in practice. [...] the input matrices must be astronomically large for the difference in time to be apparent.}}</ref>
 
== Simple algorithms ==
 
If ''A'', ''B'' are {{math|''n'' × ''n''}} matrices over a field, then their product ''AB'' is also an {{math|''n'' × ''n''}} matrix over that field, defined entrywise as
:<math>
(AB)_{ij} = \sum_{k = 1}^n A_{ik} B_{kj}.
</math>
 
=== Schoolbook algorithm ===
{{For|implementation techniques (in particular parallel and distributed algorithms)|Matrix multiplication algorithm}}
 
The simplest approach to computing the product of two {{math|''n'' × ''n''}} matrices ''A'' and ''B'' is to compute the arithmetic expressions coming from the definition of matrix multiplication. In pseudocode:
 
 input is A and B, both n by n matrices
 initialize C to be an n by n matrix of all zeros
 for i from 1 to n:
     for j from 1 to n:
         for k from 1 to n:
             C[i][j] = C[i][j] + A[i][k]*B[k][j]
 output C (as A*B)
 
This [[algorithm]] requires, in the [[worst-case complexity|worst case]], {{tmath|n^3}} multiplications of scalars and {{tmath|(n-1)n^2}} additions for computing the product of two square {{math|''n''×''n''}} matrices. Its [[computational complexity]] is therefore {{tmath|O(n^3)}}, in a [[model of computation]] where field operations (addition and multiplication) take constant time (in practice, this is the case for [[floating point]] numbers, but not necessarily for integers).
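For concreteness, the pseudocode above can be transcribed into runnable Python (a minimal sketch; the list-of-lists representation of matrices is an illustrative choice, not drawn from a cited source):

 def matmul(A, B):
     """Schoolbook product of two n-by-n matrices given as lists of lists."""
     n = len(A)
     C = [[0] * n for _ in range(n)]
     for i in range(n):
         for j in range(n):
             for k in range(n):
                 C[i][j] += A[i][k] * B[k][j]  # n**3 scalar multiplications in total
     return C
 
 assert matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]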
 
=== Strassen's algorithm ===
{{Main|Strassen algorithm}}
Strassen's algorithm is based on a way of multiplying two {{math|2 × 2}}-matrices which requires only 7 multiplications (instead of the usual 8), at the expense of several additional addition and subtraction operations. Applying this recursively gives an algorithm with a multiplicative cost of <math>O( n^{\log_{2}7}) \approx O(n^{2.807})</math>. Strassen's algorithm is more complex, and the [[numerical stability]] is reduced compared to the naïve algorithm,<ref>{{Citation | last1=Miller | first1=Webb | title=Computational complexity and numerical stability | citeseerx = 10.1.1.148.9947 | year=1975 | journal=SIAM Journal on Computing | volume=4 | issue=2 | pages=97–107 | doi=10.1137/0204009}}</ref> but it is faster in cases where {{math|''n'' > 100}} or so<ref name="skiena"/> and appears in several libraries, such as [[Basic Linear Algebra Subprograms|BLAS]].<ref>{{cite book |last1=Press |first1=William H. |last2=Flannery |first2=Brian P. |last3=Teukolsky |first3=Saul A. |author3-link=Saul Teukolsky |last4=Vetterling |first4=William T. |title=Numerical Recipes: The Art of Scientific Computing |publisher=[[Cambridge University Press]] |edition=3rd |isbn=978-0-521-88068-8 |year=2007 |page=[https://archive.org/details/numericalrecipes00pres_033/page/n131 108]|title-link=Numerical Recipes }}</ref> It is very useful for large matrices over exact domains such as [[finite field]]s, where numerical stability is not an issue.
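For illustration, Strassen's seven block products and their recombination can be sketched in Python (a minimal sketch assuming ''n'' is a power of two; NumPy is used only for convenient block arithmetic, and a practical implementation would fall back to the schoolbook method below some size threshold):

 import numpy as np
 
 def strassen(A, B):
     """Multiply two n-by-n NumPy arrays, n a power of two,
     using 7 recursive block products instead of 8."""
     n = A.shape[0]
     if n == 1:
         return A * B
     m = n // 2
     A11, A12, A21, A22 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
     B11, B12, B21, B22 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
     M1 = strassen(A11 + A22, B11 + B22)
     M2 = strassen(A21 + A22, B11)
     M3 = strassen(A11, B12 - B22)
     M4 = strassen(A22, B21 - B11)
     M5 = strassen(A11 + A12, B22)
     M6 = strassen(A21 - A11, B11 + B12)
     M7 = strassen(A12 - A22, B21 + B22)
     # Recombine the 7 products into the 4 blocks of the result.
     return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                      [M2 + M4, M1 - M2 + M3 + M6]])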
 
== The matrix multiplication exponent ==
 
The '''matrix multiplication exponent''', usually denoted {{math|ω}}, is the smallest real number for which any two <math>n\times n</math> matrices over a field can be multiplied together using <math>n^{\omega + o(1)}</math> field operations. This notation is commonly used in [[algorithm]]s research, so that algorithms using matrix multiplication as a subroutine have bounds on running time that remain meaningful regardless of the true value of {{math|ω}}.
 
Using a naive lower bound and schoolbook matrix multiplication for the upper bound, one can straightforwardly conclude that {{math|2 ≤ ω ≤ 3}}. Whether {{math|1=ω = 2}} is a major open question in [[theoretical computer science]], and there is a line of research developing matrix multiplication algorithms to get improved bounds on {{math|ω}}.
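Each recursive scheme of the kind described below yields a concrete upper bound: a method for multiplying {{math|''k'' × ''k''}} matrices with {{math|''m''}} scalar multiplications, applied recursively, shows {{math|ω ≤ log<sub>''k''</sub> ''m''}}. A quick numerical check in Python (an illustration, not from a cited source):

 from math import log
 
 # omega <= log_k(m) for a k-by-k scheme using m scalar multiplications
 print(log(8, 2))  # 3.0        (schoolbook: 8 products for 2 x 2)
 print(log(7, 2))  # 2.8073...  (Strassen: 7 products for 2 x 2)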
 
The current best bound on {{math|ω}} is {{math|ω < 2.3728596}}, by Josh Alman and [[Virginia Vassilevska Williams]].<ref name="aw20"/> This algorithm, like all other recent algorithms in this line of research, uses the ''laser method'', a generalization of the Coppersmith–Winograd algorithm, which was given by [[Don Coppersmith]] and [[Shmuel Winograd]] in 1990, was the best matrix multiplication algorithm until 2010, and has an asymptotic complexity of {{math|''O''(''n''<sup>2.375477</sup>)}}.<ref name="coppersmith">{{Citation|doi=10.1016/S0747-7171(08)80013-2 |title=Matrix multiplication via arithmetic progressions |url=http://www.cs.umd.edu/~gasarch/TOPICS/ramsey/matrixmult.pdf |year=1990 |last1=Coppersmith |first1=Don |last2=Winograd |first2=Shmuel |journal=Journal of Symbolic Computation |volume=9|issue=3|pages=251}}</ref> The conceptual idea of these algorithms is similar to Strassen's algorithm: a way is devised for multiplying two {{math|''k'' × ''k''}}-matrices with fewer than {{math|''k''<sup>3</sup>}} multiplications, and this technique is applied recursively. However, the laser method has limitations to its power: it cannot be used to show that {{math|ω < 2.3725}}.<ref>{{Cite journal|last1=Ambainis|first1=Andris|last2=Filmus|first2=Yuval|last3=Le Gall|first3=François|date=2015-06-14|title=Fast Matrix Multiplication: Limitations of the Coppersmith-Winograd Method|url=https://doi.org/10.1145/2746539.2746554|journal=Proceedings of the forty-seventh annual ACM symposium on Theory of Computing|series=STOC '15|___location=Portland, Oregon, USA|publisher=Association for Computing Machinery|pages=585–593|doi=10.1145/2746539.2746554|isbn=978-1-4503-3536-2}}</ref>
 
 
=== Lower bounds for ω ===
There is a trivial lower bound of {{tmath|\omega \ge 2}}: since any algorithm for multiplying two {{math|''n'' × ''n''}}-matrices has to process all {{math|2''n''<sup>2</sup>}} entries, there is a trivial asymptotic lower bound of {{math|Ω(''n''<sup>2</sup>)}} operations for any matrix multiplication algorithm. Thus {{tmath|2\le \omega < 2.373}}. It is unknown whether {{tmath|\omega > 2}}. The best known lower bound for matrix-multiplication complexity is {{math|Ω(''n''<sup>2</sup> log(''n''))}}, for bounded coefficient [[Arithmetic circuit complexity|arithmetic circuits]] over the real or complex numbers, and is due to [[Ran Raz]].<ref>{{cite journal | last1 = Raz | first1 = Ran | author-link = Ran Raz | year = 2002| title = On the complexity of matrix product | journal = Proceedings of the Thirty-fourth Annual ACM Symposium on Theory of Computing | pages = 144 | doi = 10.1145/509907.509932 | isbn = 1581134959 | s2cid = 9582328 }}</ref>
=== Group theory reformulation of matrix multiplication algorithms ===

[[Henry Cohn]], [[Robert Kleinberg]], [[Balázs Szegedy]] and [[Chris Umans]] put methods such as the Strassen and Coppersmith–Winograd algorithms in an entirely different [[group theory|group-theoretic]] context, by utilising triples of subsets of finite groups which satisfy a disjointness property called the [[Triple product property|triple product property (TPP)]]. They also give conjectures that, if true, would imply that there are matrix multiplication algorithms with essentially quadratic complexity. This implies that the optimal exponent of matrix multiplication is 2, which most researchers believe is indeed the case.<ref name="robinson"/> One such conjecture is that families of [[wreath product]]s of [[Abelian group]]s with symmetric groups realise families of subset triples with a simultaneous version of the TPP.<ref>{{Cite book | last1 = Cohn | first1 = H. | last2 = Kleinberg | first2 = R. | last3 = Szegedy | first3 = B. | last4 = Umans | first4 = C. | chapter = Group-theoretic Algorithms for Matrix Multiplication | doi = 10.1109/SFCS.2005.39 | title = 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS'05) | pages = 379 | year = 2005 | isbn = 0-7695-2468-0 | s2cid = 41278294 | url = https://authors.library.caltech.edu/23966/ }}</ref><ref>Henry Cohn, Chris Umans. A Group-theoretic Approach to Fast Matrix Multiplication. {{arxiv|math.GR/0307321}}. ''Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science'', 11–14 October 2003, Cambridge, MA, IEEE Computer Society, pp.&nbsp;438–449.</ref> Several of their conjectures have since been disproven by Blasiak, Cohn, Church, Grochow, Naslund, Sawin, and Umans using the slice rank method.<ref>{{Cite book | last1 = Blasiak | first1 = J. | last2 = Cohn | first2 = H. | last3 = Church | first3 = T. | last4 = Grochow | first4 = J. | last5 = Naslund | first5= E. | last6 = Sawin | first6 = W. | last7=Umans | first7= C.| chapter= On cap sets and the group-theoretic approach to matrix multiplication | doi = 10.19086/da.1245 | title = Discrete Analysis | year = 2017 | s2cid = 9687868 | url = http://discreteanalysisjournal.com/article/1245-on-cap-sets-and-the-group-theoretic-approach-to-matrix-multiplication}}</ref> Further, Alon, Shpilka and [[Chris Umans]] have recently shown that some of these conjectures implying fast matrix multiplication are incompatible with another plausible conjecture, the [[sunflower conjecture]].<ref>[[Noga Alon|Alon]], Shpilka, Umans, [http://eccc.hpi-web.de/report/2011/067/ On Sunflowers and Matrix Multiplication]</ref>

=== Rectangular matrix multiplication ===

Improved algorithms are also known for the more general problem of multiplying rectangular matrices, obtained from powers of the Coppersmith–Winograd tensor.<ref>{{Citation|last1=Le Gall|first1=François|last2=Urrutia|first2=Florent|title=Improved Rectangular Matrix Multiplication using Powers of the Coppersmith-Winograd Tensor|date=2018-01-01|url=https://epubs.siam.org/doi/10.1137/1.9781611975031.67|work=Proceedings of the 2018 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA)|pages=1029–1046|series=Proceedings|publisher=Society for Industrial and Applied Mathematics|doi=10.1137/1.9781611975031.67|access-date=2021-05-23|arxiv=1708.05622}}</ref>
 
==Related complexities==
{{further|Computational complexity of mathematical operations#Matrix algebra}}
The importance of the computational complexity of matrix multiplication rests on the fact that many algorithmic problems may be solved by means of matrix computation, and that most problems on matrices have a complexity which is either the same as that of matrix multiplication (up to a multiplicative constant) or may be expressed in terms of the complexity of matrix multiplication or its exponent <math>\omega.</math>
 
There are several advantages to expressing complexities in terms of the exponent <math>\omega</math> of matrix multiplication. First, if <math>\omega</math> is improved, this automatically improves the known upper bound on the complexity of many algorithms. Second, in practical implementations, one never uses the matrix multiplication algorithm with the best asymptotic complexity, because the constant hidden behind the [[big O notation]] is too large for the algorithm to be competitive at matrix sizes that can be manipulated on a computer.{{citation needed|date=February 2020}} Thus expressing complexities in terms of <math>\omega</math> provides a more realistic measure, since it remains valid whichever algorithm is chosen for the matrix computation.
 
Problems that have the same asymptotic complexity as matrix multiplication include computing the [[determinant]], [[matrix inversion]], and [[Gaussian elimination]] (see the next section). Problems whose complexity is expressible in terms of <math>\omega</math> include the characteristic polynomial, eigenvalues (but not eigenvectors), the [[Hermite normal form]], and the [[Smith normal form]].{{citation needed|date=March 2018}}
== See also ==
* [[Computational complexity of mathematical operations]]
* [[CYK algorithm#Valiant's algorithm|CYK algorithm, §Valiant's algorithm]]
* [[Freivalds' algorithm]], a simple [[Monte Carlo algorithm]] that, given matrices {{mvar|A}}, {{mvar|B}} and {{mvar|C}}, verifies in {{math|Θ(''n''<sup>2</sup>)}} time if {{math|''AB'' {{=}} ''C''}} (see the sketch after this list)
* [[Matrix chain multiplication]]
* [[Matrix multiplication]]
* [[Matrix multiplication algorithm]]
* [[Sparse matrix-vector multiplication]]
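
The verification idea behind Freivalds' algorithm, listed above, can be sketched in Python (a minimal illustration assuming integer matrices given as lists of lists; the ''trials'' parameter is an illustrative choice):

 import random
 
 def freivalds(A, B, C, trials=10):
     """Monte Carlo check that A times B equals C for n-by-n matrices.
     Each trial costs O(n^2); a wrong 'True' has probability at most 2**-trials."""
     n = len(A)
     for _ in range(trials):
         r = [random.randint(0, 1) for _ in range(n)]          # random 0/1 vector
         Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
         ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
         Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
         if ABr != Cr:
             return False  # definitely not equal
     return True  # probably equal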