In the [[mathematics|mathematical]] discipline of [[linear algebra]], the '''Coppersmith–Winograd algorithm''', named after [[Don Coppersmith]] and [[Shmuel Winograd]], is the asymptotically fastest known [[algorithm]] for square [[matrix multiplication]]. It can multiply two <math>n \times n</math> matrices in <math>O(n^{2.3727})</math> time (see [[Big O notation]]). This is an improvement over the trivial <math>O(n^3)</math> time algorithm and the <math>O(n^{2.807})</math> time [[Strassen algorithm]]. It may be possible to improve the exponent further; however, the exponent must be at least 2, because an <math>n \times n</math> matrix has <math>n^2</math> entries, and all of them must be read at least once to compute the exact result. Coppersmith and Winograd's original analysis gave a bound of <math>O(n^{2.3755})</math>.<ref>In Coppersmith and Winograd's original paper</ref> In 2010, Stothers gave a tighter analysis of the algorithm, yielding <math>O(n^{2.3737})</math>.<ref>{{Citation | last1=Stothers | first1=Andrew | title=On the Complexity of Matrix Multiplication | url=http://www.maths.ed.ac.uk/pg/thesis/stothers.pdf | year=2010}}.</ref> In 2011, Williams improved the bound further to <math>O(n^{2.3727})</math>.<ref>{{Citation | last1=Williams | first1=Virginia | title=Breaking the Coppersmith-Winograd barrier | url=http://www.cs.berkeley.edu/~virgi/matrixmult.pdf | year=2011}}.</ref>
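For comparison, the trivial <math>O(n^3)</math> baseline mentioned above is the "schoolbook" algorithm, which computes each of the <math>n^2</math> output entries as a dot product of one row and one column. A minimal sketch in Python (illustrative only; the function name is not from any source):

```python
def naive_matmul(A, B):
    """Schoolbook matrix multiplication: n^2 entries, each a length-n
    dot product, for n^3 scalar multiplications in total."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C
```

Strassen's algorithm and the Coppersmith–Winograd algorithm reduce the number of scalar multiplications below <math>n^3</math> by recursively combining cleverly chosen linear combinations of blocks, which is what drives the exponent below 3.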
The Coppersmith–Winograd algorithm is frequently used as a building block in other algorithms to prove theoretical time bounds. However, unlike the Strassen algorithm, it is not used in practice because it only provides an advantage for matrices so large that they cannot be processed by modern hardware.<ref>{{Citation | last1=Robinson | first1=Sara | title=Toward an Optimal Algorithm for Matrix Multiplication | url=http://www.siam.org/pdf/news/174.pdf | year=2005 | journal=SIAM News | volume=38 | issue=9}}</ref>
[[Henry Cohn]], [[Robert Kleinberg]], [[Balázs Szegedy]] and [[Christopher Umans]] have rederived the Coppersmith–Winograd algorithm using a [[group theory|group-theoretic]] construction. They also showed that either of two different conjectures would imply that the optimal exponent of matrix multiplication is 2, as has long been suspected.<ref>{{cite doi|10.1109/SFCS.2005.39}}</ref>
==References==
{{reflist}}
* {{Citation | doi=10.1016/S0747-7171(08)80013-2 | last1=Coppersmith | first1=Don |last2= Winograd | first2=Shmuel | title=Matrix multiplication via arithmetic progressions | url=http://www.cs.umd.edu/~gasarch/ramsey/matrixmult.pdf | year=1990 | journal=Journal of Symbolic Computation| volume=9 | issue=3 | pages=251–280}}.
* {{Citation | last1=Williams | first1=Virginia | title=Breaking the Coppersmith-Winograd barrier | url=http://www.cs.berkeley.edu/~virgi/matrixmult.pdf | year=2011}}.
{{Numerical linear algebra}}
{{Use dmy dates|date=September 2010}}
{{DEFAULTSORT:Coppersmith–Winograd Algorithm}}
[[Category:Numerical linear algebra]]
[[Category:Matrix theory]]
[[cs:Coppersmithův-Winogradův algoritmus]]
[[eo:Algoritmo de Coppersmith-Winograd]]
[[fr:Algorithme de Coppersmith-Winograd]]
[[pt:Algoritmo de Coppersmith-Winograd]]
[[ru:Алгоритм Копперсмита — Винограда]]