Coppersmith–Winograd algorithm: Difference between revisions

In the [[mathematics|mathematical]] discipline of [[linear algebra]], the '''Coppersmith–Winograd algorithm''', named after [[Don Coppersmith]] and [[Shmuel Winograd]], was the asymptotically fastest known [[algorithm]] for square [[matrix multiplication]] from 1990 to 2011, and is the second-fastest {{as of |2011|lc=on}}.<ref>{{Citation | last1=Williams | first1=Virginia | title=Breaking the Coppersmith-Winograd barrier | url=http://www.cs.berkeley.edu/~virgi/matrixmult.pdf | year=2011}}.</ref> It can multiply two <math>n \times n</math> matrices in <math>O(n^{2.376})</math> time (see [[Big O notation]]). This is an improvement over the trivial <math>O(n^3)</math> time algorithm and the <math>O(n^{2.807})</math> time [[Strassen algorithm]]. It is possible to improve the exponent further; however, the exponent must be at least 2, because an <math>n \times n</math> matrix has <math>n^2</math> entries, and every entry must be read at least once to compute the exact result.
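For comparison, the trivial <math>O(n^3)</math> algorithm mentioned above can be sketched in a few lines of Python (the function name <code>matmul</code> is illustrative, not part of any library):

```python
def matmul(A, B):
    """Trivial O(n^3) multiplication of two n x n matrices,
    given as lists of rows. This is the baseline that Strassen
    and Coppersmith-Winograd asymptotically improve on."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):          # n choices of output row
        for j in range(n):      # n choices of output column
            for k in range(n):  # n multiply-adds per entry -> n^3 total
                C[i][j] += A[i][k] * B[k][j]
    return C
```

Each of the <math>n^2</math> output entries is an inner product of length <math>n</math>, which is where the cubic operation count comes from; the faster algorithms reduce the number of scalar multiplications by recursively combining blocks of the matrices.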
 
The Coppersmith–Winograd algorithm is frequently used as a building block in other algorithms to prove theoretical time bounds. However, unlike the Strassen algorithm, it is not used in practice because it only provides an advantage for matrices so large that they cannot be processed by modern hardware.<ref>{{Citation | last1=Robinson | first1=Sara | title=Toward an Optimal Algorithm for Matrix Multiplication | url=http://www.siam.org/pdf/news/174.pdf | year=2005 | journal=SIAM News | volume=38 | issue=9}}</ref>