Computational complexity of matrix multiplication

== Simple algorithms ==
 
If ''A'', ''B'' are two {{math|''n'' × ''n''}} matrices over a field, then their product ''AB'' is also an {{math|''n'' × ''n''}} matrix over that field, defined entrywise as
:<math>
(AB)_{ij} = \sum_{k = 1}^n A_{ik} B_{kj}.
</math>
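Computing every entry directly from this definition takes {{math|''n''}} multiplications and {{math|''n'' − 1}} additions per entry, hence {{math|''n''<sup>3</sup>}} multiplications in total. The following Python sketch is illustrative only (it is not from any particular library; matrices are assumed to be given as nested lists over a field):

<syntaxhighlight lang="python">
def naive_matmul(A, B):
    """Multiply two n-by-n matrices by the defining formula (AB)_ij = sum_k A_ik * B_kj."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            total = 0
            for k in range(n):
                total += A[i][k] * B[k][j]  # one field multiplication and one addition
            C[i][j] = total
    return C
</syntaxhighlight>
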
=== Strassen's algorithm ===
{{Main|Strassen algorithm}}
Strassen's algorithm improves on naive matrix multiplication through a [[Divide-and-conquer algorithm|divide-and-conquer]] approach. The key observation is that multiplying two {{math|2 × 2}} matrices can be done with only seven multiplications, instead of the usual eight (at the expense of 11 additional addition and subtraction operations). This means that, treating the input {{math|''n''×''n''}} matrices as [[Block matrix|block]] {{math|2 × 2}} matrices, the task of multiplying two {{math|''n''×''n''}} matrices can be reduced to seven subproblems of multiplying two {{math|''n''/2 × ''n''/2}} matrices. Applying this recursively gives an algorithm needing <math>O( n^{\log_{2}7}) \approx O(n^{2.807})</math> field operations.
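The recursion can be sketched as follows. This Python sketch is purely illustrative: it assumes NumPy, assumes {{math|''n''}} is a power of two, and recurses all the way down to {{math|1 × 1}} blocks, whereas practical implementations switch to the naive method below some cutoff size.

<syntaxhighlight lang="python">
import numpy as np

def strassen(A, B):
    """Strassen's seven-multiplication recursion for n-by-n matrices, n a power of two."""
    n = A.shape[0]
    if n == 1:  # base case: a single field multiplication
        return A * B
    m = n // 2
    A11, A12, A21, A22 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
    B11, B12, B21, B22 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
    # The seven block products (each a recursive call on half-size matrices).
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    # Recombine into the four blocks of the product using only additions and subtractions.
    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4, M1 - M2 + M3 + M6]])
</syntaxhighlight>
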
 
Unlike algorithms with faster asymptotic complexity, Strassen's algorithm is used in practice. The [[numerical stability]] is reduced compared to the naive algorithm,<ref>{{cite journal | last1=Miller | first1=Webb | title=Computational complexity and numerical stability | citeseerx = 10.1.1.148.9947 | year=1975 | journal=SIAM Journal on Computing | volume=4 | issue=2 | pages=97–107 | doi=10.1137/0204009}}</ref> but it is faster in cases where {{math|''n'' > 100}} or so<ref name="skiena">{{cite book |first=Steven |last=Skiena |date=2012 |author-link=Steven Skiena |title=The Algorithm Design Manual |url=https://archive.org/details/algorithmdesignm00skie_772 |url-access=limited |publisher=Springer |pages=[https://archive.org/details/algorithmdesignm00skie_772/page/n56 45]–46, 401–403 |doi=10.1007/978-1-84800-070-4_4|chapter=Sorting and Searching |isbn=978-1-84800-069-8 }}</ref> and appears in several libraries, such as [[Basic Linear Algebra Subprograms|BLAS]].<ref>{{cite book |last1=Press |first1=William H. |last2=Flannery |first2=Brian P. |last3=Teukolsky |first3=Saul A. |author3-link=Saul Teukolsky |last4=Vetterling |first4=William T. |title=Numerical Recipes: The Art of Scientific Computing |publisher=[[Cambridge University Press]] |edition=3rd |isbn=978-0-521-88068-8 |year=2007 |page=[https://archive.org/details/numericalrecipes00pres_033/page/n131 108]|title-link=Numerical Recipes }}</ref> Fast matrix multiplication algorithms cannot achieve ''component-wise stability'', but some can be shown to exhibit ''norm-wise stability''.<ref name="bdl16">{{cite journal | last1=Ballard | first1=Grey | last2=Benson | first2=Austin R. | last3=Druinsky | first3=Alex | last4=Lipshitz | first4=Benjamin | last5=Schwartz | first5=Oded | title=Improving the numerical stability of fast matrix multiplication | year=2016 | journal=SIAM Journal on Matrix Analysis and Applications | volume=37 | issue=4 | pages=1382–1418 | doi=10.1137/15M1032168 | arxiv=1507.00687| s2cid=2853388 }}</ref> It is very useful for large matrices over exact domains such as [[finite field]]s, where numerical stability is not an issue.
| pages=234&ndash;235
| date=Jun 1979
| url-access=subscription
}}</ref>
|-
| 1981 || 2.522 || [[Arnold Schönhage|Schönhage]]<ref>
| volume = 114
| year = 2023}}</ref><ref>{{cite journal |last1=Makarov |first1=O. M. |title=An algorithm for multiplying 3×3 matrices |journal=Zhurnal Vychislitel'noi Matematiki I Matematicheskoi Fiziki |volume=26 |issue=2 |year=1986 |pages=293–294 |url=https://www.mathnet.ru/php/archive.phtml?wshow=paper&jrnid=zvmmf&paperid=4056&option_lang=eng |access-date=5 October 2022}}
:Also in {{cite journal |doi=10.1016/0041-5553(86)90203-X |title=An algorithm for multiplying 3×3 matrices |year=1986 |last1=Makarov |first1=O. M. |journal=USSR Computational Mathematics and Mathematical Physics |volume=26 |pages=179–180 }}</ref> (23 if non-commutative<ref>{{Cite journal |last=Laderman |first=Julian D. |date=1976 |title=A noncommutative algorithm for multiplying 3×3 matrices using 23 multiplications |url=https://www.ams.org/bull/1976-82-01/S0002-9904-1976-13988-2/ |journal=Bulletin of the American Mathematical Society |language=en |volume=82 |issue=1 |pages=126–128 |doi=10.1090/S0002-9904-1976-13988-2 |issn=0002-9904|doi-access=free }}</ref>). The lower bound on the number of multiplications needed is {{math|2''mn'' + 2''n'' − ''m'' − 2}} (for multiplying {{math|''n'' × ''m''}} matrices with {{math|''m'' × ''n''}} matrices using the substitution method, with <math>m \ge n \ge 3</math>), which means the {{math|1=''n'' = 3}} case requires at least 19 multiplications and the {{math|1=''n'' = 4}} case at least 34.<ref>{{Cite journal |last=Bläser |first=Markus |date=February 2003 |title=On the complexity of the multiplication of matrices of small formats |journal=Journal of Complexity |language=en |volume=19 |issue=1 |pages=43–60 |doi=10.1016/S0885-064X(02)00007-9|doi-access=free }}</ref> For {{math|1=''n'' = 2}}, seven multiplications with 15 additions is optimal, compared to only four additions when eight multiplications are used.<ref>{{Cite journal |last=Winograd |first=S. |date=1971-10-01 |title=On multiplication of 2 × 2 matrices |journal=Linear Algebra and Its Applications |language=en |volume=4 |issue=4 |pages=381–388 |doi=10.1016/0024-3795(71)90009-7 |issn=0024-3795|doi-access=free }}</ref><ref>{{Cite book |last=Probert |first=R. L. |url=http://worldcat.org/oclc/1124200063 |title=On the complexity of matrix multiplication |date=1973 |publisher=University of Waterloo |oclc=1124200063}}</ref>
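For example, substituting {{math|1=''m'' = ''n''}} into this bound gives the two figures quoted above:
:<math>m = n = 3:\ 2\cdot 3\cdot 3 + 2\cdot 3 - 3 - 2 = 19, \qquad m = n = 4:\ 2\cdot 4\cdot 4 + 2\cdot 4 - 4 - 2 = 34.</math>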
 
==See also==