=== Lower bounds for ω ===
 
There is a trivial lower bound of {{tmath|\omega \ge 2}}: since any algorithm for multiplying two {{math|''n'' × ''n''}} matrices has to process all {{math|2''n''<sup>2</sup>}} entries, there is an asymptotic lower bound of {{math|Ω(''n''<sup>2</sup>)}} operations for any matrix multiplication algorithm. Thus {{tmath|2\le \omega < 2.37188}}. It is unknown whether {{tmath|\omega > 2}}. The best known lower bound for matrix-multiplication complexity is {{math|Ω(''n''<sup>2</sup> log(''n''))}}, for bounded-coefficient [[Arithmetic circuit complexity|arithmetic circuits]] over the real or complex numbers, and is due to [[Ran Raz]].<ref>{{cite book | last1 = Raz | first1 = Ran | author-link = Ran Raz | chapter = On the complexity of matrix product | title = Proceedings of the thiry-fourth annual ACM symposium on Theory of computing | year = 2002 | pages = 144–151 | doi = 10.1145/509907.509932 | isbn = 1581134959 | s2cid = 9582328 }}</ref>
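To make the counting argument explicit (a sketch only; the notation {{math|''M''(''n'')}}, the minimum number of arithmetic operations needed to multiply two {{math|''n'' × ''n''}} matrices, is introduced here for illustration): each arithmetic operation reads at most two values, so an algorithm whose output depends on all {{math|2''n''<sup>2</sup>}} input entries must satisfy
<math display="block">M(n) \;\ge\; \frac{2n^2}{2} \;=\; n^2, \qquad\text{so any } \tau \text{ with } M(n)=O(n^\tau) \text{ satisfies } \tau \ge 2.</math>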
 
The exponent ω is a [[Accumulation point|limit point]], in that it is defined as the infimum of the exponents of all matrix multiplication algorithms. It is known that this limit point is not achieved: under the model of computation typically studied, there is no matrix multiplication algorithm that uses precisely {{math|O(''n''<sup>ω</sup>)}} operations; there must be an additional factor of {{math|''n''<sup>''o''(1)</sup>}}.<ref name="CW.81"/>
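In the illustrative notation above (with {{math|''M''(''n'')}} the arithmetic complexity of {{math|''n'' × ''n''}} matrix multiplication), the definition and the non-achievement result can be summarized as
<math display="block">\omega \;=\; \inf\bigl\{\tau \in \mathbb{R} : M(n) = O(n^{\tau})\bigr\}, \qquad M(n) = n^{\omega + o(1)}, \qquad M(n) \neq O(n^{\omega}),</math>
so the operation count matches {{math|''n''<sup>ω</sup>}} up to the {{math|''n''<sup>''o''(1)</sup>}} factor, but no single algorithm attains the exponent exactly.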