In the [[mathematics|mathematical]] discipline of [[linear algebra]], the '''Coppersmith–Winograd algorithm''' is the fastest currently known [[algorithm]] for square [[matrix multiplication]]. It can multiply two <math>n \times n</math> matrices in <math>O(n^{2.376})</math> time (see [[Big O notation]]). This is an improvement over the trivial <math>O(n^3)</math>-time algorithm and the <math>O(n^{2.807})</math>-time [[Strassen algorithm]]. It might be possible to improve the exponent further; however, the exponent must be at least 2, because an <math>n \times n</math> matrix has <math>n^2</math> entries, and every entry must be read at least once to compute the exact result.
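For comparison, the trivial <math>O(n^3)</math> method computes each of the <math>n^2</math> entries of the product as a dot product of a row of the first matrix with a column of the second. The following sketch is purely illustrative (the function name <code>naive_multiply</code> is not part of any standard library) and shows this method in Python:

<syntaxhighlight lang="python">
def naive_multiply(A, B):
    """Multiply two n-by-n matrices with the straightforward O(n^3) method."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # Entry C[i][j] is the dot product of row i of A and column j of B.
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

# Example: multiplying two 2-by-2 matrices.
A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(naive_multiply(A, B))  # [[19, 22], [43, 50]]
</syntaxhighlight>

Each of the <math>n^2</math> entries requires <math>n</math> scalar multiplications, giving <math>n^3</math> in total; faster algorithms such as Strassen's reduce this count by recombining partial products.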
 
The Coppersmith–Winograd algorithm is frequently used as a building block in other algorithms to prove theoretical time bounds. However, unlike the Strassen algorithm, it is not used in practice because of the huge constant factors hidden in the [[Big O notation]].
 
[[Henry Cohn]], [[Robert Kleinberg]], [[Balázs Szegedy]] and [[Christopher Umans]] have rederived the Coppersmith–Winograd algorithm using a [[group theory|group-theoretic]] construction. They also show that either of two different conjectures would imply that the exponent of matrix multiplication is 2, as has long been suspected. It has also been conjectured that no fastest algorithm for matrix multiplication exists, in light of the nearly 20 successive improvements leading up to the Coppersmith–Winograd algorithm.
 
In the group-theoretic approach outlined by Cohn, Umans, et al., there is a concrete way of proving estimates of the exponent <math>\omega</math> of matrix multiplication via a concept known as the simultaneous triple product property (STPP). More specifically, the STPP describes the property of a finite group simultaneously "realizing" several independent matrix multiplications via a corresponding family of "index triples" of subsets of the group, in such a way that the complexity (rank) of these several multiplications does not exceed the complexity (rank) of the group algebra. This leads to general estimates for <math>\omega</math> in terms of the size of the group, the number of STPP triples realized by the group, and the sizes of the components of these triples. The best groups for achieving tight bounds for <math>\omega</math> in this way appear to be wreath products of Abelian groups with symmetric groups. For such wreath products, the choice of appropriate STPP triples in an Abelian group and permutations in a corresponding symmetric group might yield concrete estimates of <math>\omega</math> close to 2, as described by [[Sandeep Murthy]].
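As a sketch of the kind of bound this yields (following the statement of the simultaneous triple product property theorem in the Cohn–Kleinberg–Szegedy–Umans paper cited below): suppose a finite group <math>G</math> with irreducible character degrees <math>d_1, d_2, \ldots</math> simultaneously realizes matrix multiplications of sizes <math>\langle n_1, m_1, p_1 \rangle, \ldots, \langle n_k, m_k, p_k \rangle</math>. Then

:<math>\sum_{i=1}^{k} (n_i m_i p_i)^{\omega/3} \;\le\; \sum_{j} d_j^{\,\omega},</math>

so a group that realizes large matrix multiplications while having small character degrees yields an upper bound on <math>\omega</math>.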
 
==References==
* Sandeep Murthy. The Simultaneous Triple Product Property and Group-theoretic Results for the Exponent of Matrix Multiplication. {{arXiv|archive=cs.CS|id=0703145}}, 3 April 2007.
* Henry Cohn, Robert Kleinberg, Balázs Szegedy, and Chris Umans. Group-theoretic Algorithms for Matrix Multiplication. {{arXiv|archive=math.GR|id=0511460}}. ''Proceedings of the 46th Annual Symposium on Foundations of Computer Science'', 23&ndash;25 October 2005, Pittsburgh, PA, IEEE Computer Society, pp. 379&ndash;388.
* [[Don Coppersmith]] and [[Shmuel Winograd]]. Matrix multiplication via arithmetic progressions. ''Journal of Symbolic Computation'', 9:251&ndash;280, 1990.
 
[[Category:Numerical linear algebra]]
[[Category:Matrix theory]]
[[Category:Algorithms]]
 
[[fr:Algorithme de Coppersmith-Winograd]]