|}
The ''matrix multiplication exponent'', usually denoted {{math|ω}}, is the smallest real number for which any two <math>n\times n</math> matrices over a field can be multiplied together using <math>n^{\omega + o(1)}</math> field operations. This notation is commonly used in [[algorithm]]s research, so that algorithms using matrix multiplication as a subroutine have meaningful bounds on running time regardless of the true value of {{math|ω}}.
Using a naive lower bound and schoolbook matrix multiplication for the upper bound, one can straightforwardly conclude that {{math|2 ≤ ω ≤ 3}}. Whether {{math|1=ω = 2}} is a major open question in [[theoretical computer science]], and there is a line of research developing matrix multiplication algorithms to get improved bounds on {{math|ω}}.
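The schoolbook upper bound can be made concrete with a short sketch (the function name is illustrative, not a standard library routine): multiplying two {{math|''n'' × ''n''}} matrices entry by entry uses {{math|''n''<sup>3</sup>}} scalar multiplications, which gives {{math|ω ≤ 3}}.

```python
def schoolbook_multiply(A, B):
    """Multiply two n-by-n matrices (lists of lists) the schoolbook way.

    Uses n^3 scalar multiplications and n^2 * (n - 1) additions,
    so the total operation count is O(n^3), giving omega <= 3.
    """
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # Each entry C[i][j] is the dot product of row i of A
            # with column j of B: n multiplications per entry.
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C
```

The matching lower bound {{math|ω ≥ 2}} follows because any algorithm must at least read all {{math|''n''<sup>2</sup>}} entries of each input matrix.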
The current best bound on {{math|ω}} is {{math|ω < 2.3728596}}, due to Josh Alman and [[Virginia Vassilevska Williams]].<ref name="aw20"/> This algorithm, like all other recent algorithms in this line of research, uses the ''laser method'', a generalization of the Coppersmith–Winograd algorithm, which was given by [[Don Coppersmith]] and [[Shmuel Winograd]] in 1990.
=== Group theory reformulation of matrix multiplication algorithms ===