Coppersmith–Winograd algorithm

In the [[mathematics|mathematical]] discipline of [[linear algebra]], the '''Coppersmith–Winograd algorithm''' is the fastest currently known [[algorithm]] for square [[matrix multiplication]]. It can multiply two <math>n \times n</math> matrices in <math>O(n^{2.376}) \!\ </math> time (see [[Big O notation]]). This is an improvement over the trivial <math>O(n^3) \!\ </math> time algorithm and over the <math>O(n^{2.807}) \!\ </math> time [[Strassen algorithm]]. It might be possible to improve the exponent further; however, the exponent must be at least 2, because each of the <math>n^2</math> entries of an <math>n \times n</math> matrix must be read at least once to compute the exact result.
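To illustrate the bounds mentioned above, the following sketch contrasts the trivial cubic-time algorithm with Strassen's recursive scheme; the Coppersmith–Winograd algorithm itself relies on far more intricate constructions and is not shown. This is an illustrative Python sketch only: the function names are chosen for the example, and the matrices are assumed to be square with a side length that is a power of two.

<syntaxhighlight lang="python">
def naive_multiply(A, B):
    """Trivial O(n^3) multiplication of two n x n matrices (lists of lists)."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            a_ik = A[i][k]
            for j in range(n):
                C[i][j] += a_ik * B[k][j]
    return C


def strassen_multiply(A, B):
    """Strassen's scheme for n x n matrices, n a power of two.

    A 2x2 block product normally needs 8 block multiplications;
    Strassen's identities need only 7, giving exponent log2(7) ~ 2.807.
    """
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    m = n // 2

    # Helpers for splitting into m x m blocks and block addition/subtraction.
    def block(M, r, c):
        return [row[c * m:(c + 1) * m] for row in M[r * m:(r + 1) * m]]

    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

    A11, A12, A21, A22 = block(A, 0, 0), block(A, 0, 1), block(A, 1, 0), block(A, 1, 1)
    B11, B12, B21, B22 = block(B, 0, 0), block(B, 0, 1), block(B, 1, 0), block(B, 1, 1)

    # Seven recursive block products instead of eight.
    M1 = strassen_multiply(add(A11, A22), add(B11, B22))
    M2 = strassen_multiply(add(A21, A22), B11)
    M3 = strassen_multiply(A11, sub(B12, B22))
    M4 = strassen_multiply(A22, sub(B21, B11))
    M5 = strassen_multiply(add(A11, A12), B22)
    M6 = strassen_multiply(sub(A21, A11), add(B11, B12))
    M7 = strassen_multiply(sub(A12, A22), add(B21, B22))

    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)

    # Reassemble the four result blocks into one n x n matrix.
    return ([r1 + r2 for r1, r2 in zip(C11, C12)] +
            [r1 + r2 for r1, r2 in zip(C21, C22)])
</syntaxhighlight>

Strassen's scheme satisfies the recurrence <math>T(n) = 7\,T(n/2) + O(n^2)</math>, which solves to <math>O(n^{\log_2 7}) \approx O(n^{2.807})</math>. The Coppersmith–Winograd algorithm lowers the exponent further not by saving recursive multiplications in this way, but through a more elaborate analysis of bilinear and trilinear constructions.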

The Coppersmith–Winograd algorithm is frequently used as a building block in other algorithms to prove theoretical time bounds, but it is not practical for actual implementations: the constant factor hidden in the [[Big O notation]] is so large that the algorithm only becomes faster than simpler methods for impractically large matrices.
 
A newer, [[group theory|group-theoretic]] approach by [[Henry Cohn]], [[Robert Kleinberg]], [[Balázs Szegedy]] and [[Christopher Umans]] achieves an exponent of 2.41.