Coppersmith–Winograd algorithm
In the [[mathematics|mathematical]] discipline of [[linear algebra]], the '''Coppersmith–Winograd algorithm''', named after [[Don Coppersmith]] and [[Shmuel Winograd]], is the asymptotically fastest known [[algorithm]] for square [[matrix multiplication]] as of 2008. It can multiply two <math>n \times n</math> matrices in <math>O(n^{2.376})</math> time (see [[Big O notation]]). This is an improvement over the trivial <math>O(n^3)</math> time algorithm and the <math>O(n^{2.807})</math> time [[Strassen algorithm]]. It might be possible to improve the exponent further; however, the exponent must be at least 2 (because an <math>n \times n</math> matrix has <math>n^2</math> values, and all of them have to be read at least once to calculate the exact result).
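The gap between the trivial bound and the sub-cubic bounds can be made concrete with Strassen's scheme, which replaces the eight half-size block products of the naive method with seven, giving the <math>O(n^{\log_2 7}) \approx O(n^{2.807})</math> bound. A minimal Python sketch follows (helper names are ours, and it assumes <math>n</math> is a power of two; it illustrates Strassen's algorithm, not the far more involved Coppersmith–Winograd construction):

```python
def naive_mult(A, B):
    """Trivial O(n^3) multiplication of square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    """Strassen's recursion for n a power of two: seven half-size
    products instead of the eight used by the naive block method."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    m = n // 2
    # Split both matrices into quadrants.
    A11 = [row[:m] for row in A[:m]]; A12 = [row[m:] for row in A[:m]]
    A21 = [row[:m] for row in A[m:]]; A22 = [row[m:] for row in A[m:]]
    B11 = [row[:m] for row in B[:m]]; B12 = [row[m:] for row in B[:m]]
    B21 = [row[:m] for row in B[m:]]; B22 = [row[m:] for row in B[m:]]
    # The seven products (Strassen's combinations).
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    # Recombine into the quadrants of the result.
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    return ([r1 + r2 for r1, r2 in zip(C11, C12)] +
            [r1 + r2 for r1, r2 in zip(C21, C22)])
```

The recursion trades the saved multiplication for extra additions, which is why the advantage only appears asymptotically; Coppersmith–Winograd pushes the same trade much further, at the cost of constants so large the crossover point exceeds practical matrix sizes.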
It has recently been claimed that an algorithm using only <math>O(n^{1.401})</math> multiplications exists; however, this count applies only when reading the input matrices and writing the output are not taken into account.{{Citation needed|date=October 2011}}
 
The Coppersmith–Winograd algorithm is frequently used as a building block in other algorithms to prove theoretical time bounds. However, unlike the Strassen algorithm, it is not used in practice because it only provides an advantage for matrices so large that they cannot be processed by modern hardware.<ref>{{Citation | last1=Robinson | first1=Sara | title=Toward an Optimal Algorithm for Matrix Multiplication | url=http://www.siam.org/pdf/news/174.pdf | year=2005 | journal=SIAM News | volume=38 | issue=9}}</ref>