Multiplying Matrices Faster than Coppersmith-Winograd

Virginia Vassilevska Williams
STOC '12

We develop an automated approach for designing matrix multiplication algorithms based on constructions similar to the Coppersmith-Winograd construction. Using this approach we obtain a new improved bound on the matrix multiplication exponent: ω < 2.3727.
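The bound ω < 2.3727 sits at the end of a long line of work that began with Strassen's 1969 recursion. As background only (the paper's construction is tensor-based, not this), here is a minimal Python sketch of that prototype sub-cubic algorithm: 7 recursive products per level instead of 8, giving O(n^log2(7)) ≈ O(n^2.807) arithmetic operations for n a power of 2.

```python
def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    """Multiply n x n matrices (n a power of 2) with Strassen's recursion."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # Split both matrices into quadrants.
    A11 = [r[:h] for r in A[:h]]; A12 = [r[h:] for r in A[:h]]
    A21 = [r[:h] for r in A[h:]]; A22 = [r[h:] for r in A[h:]]
    B11 = [r[:h] for r in B[:h]]; B12 = [r[h:] for r in B[:h]]
    B21 = [r[:h] for r in B[h:]]; B22 = [r[h:] for r in B[h:]]
    # The seven Strassen products (instead of eight naive ones).
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    # Recombine into the quadrants of the product.
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot
```

The post-Strassen improvements surveyed on this page (Coppersmith-Winograd and its descendants) do not refine this recursion directly; they bound the rank of larger tensors.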


Matrix Multiplication, a Little Faster
Strassen’s algorithm (1969) was the first sub-cubic matrix multiplication algorithm. Winograd (1971) improved the leading coefficient of its complexity from 7 to 6. There have been many subsequent ...
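The leading-coefficient improvement mentioned above comes from reorganizing Strassen's 2×2 base case. A sketch of the standard Winograd variant, using 7 multiplications and 15 additions/subtractions versus Strassen's 18 (the variable names below are conventional, not from the paper):

```python
def winograd_2x2(A, B):
    """Winograd's variant of Strassen's 2x2 scheme: 7 multiplications,
    15 additions/subtractions. Entries may be scalars or submatrices
    (any type supporting +, -, *)."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    # 8 preliminary additions/subtractions
    s1 = a21 + a22; s2 = s1 - a11; s3 = a11 - a21; s4 = a12 - s2
    t1 = b12 - b11; t2 = b22 - t1; t3 = b22 - b12; t4 = t2 - b21
    # 7 multiplications
    m1 = a11 * b11; m2 = a12 * b21; m3 = s4 * b22; m4 = a22 * t4
    m5 = s1 * t1;   m6 = s2 * t2;   m7 = s3 * t3
    # 7 final additions/subtractions
    u2 = m1 + m6; u3 = u2 + m7; u4 = u2 + m5
    return ((m1 + m2, u4 + m3),
            (u3 - m4, u3 + m5))
```

The lower-bound result cited further down this page (the "Matrix Multiplication, a Little Faster" entry) shows that for 2×2 base cases this leading coefficient cannot be improved further, even under change of basis.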
Fast Output-Sensitive Matrix Multiplication
A new randomized algorithm is presented that can use the known fast square matrix multiplication algorithms to perform fewer arithmetic operations than the current state of the art for output matrices that are sparse.
An Adaptable Fast Matrix Multiplication Algorithm, Going Beyond the Myth of Decimal War
In this paper we present an adaptable fast matrix multiplication (AFMM) algorithm for two n×n dense matrices which computes the product matrix with average complexity T_avg(n) = d1·d2·n^3, with the ...
Work-Efficient Matrix Inversion in Polylogarithmic Time
We present an algorithm for inversion of symmetric positive definite matrices that combines the practical requirement of an optimal number of arithmetic operations and the theoretical goal of a
Counting points on curves using a map to P1
  • J. Tuitman
  • Mathematics, Computer Science
    Math. Comput.
  • 2016
A new algorithm to compute the zeta function of a curve over a finite field using a map to the projective line is introduced and all the necessary bounds are developed.
An overview of the recent progress on matrix multiplication
The exponent ω of matrix multiplication is the infimum over all real numbers c such that for all ε > 0 there is an algorithm that multiplies n × n matrices using at most O(n^(c+ε)) arithmetic operations.
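Written out, the standard definition this entry refers to is:

```latex
\[
  \omega \;=\; \inf \left\{ c \in \mathbb{R} \;\middle|\;
    \begin{array}{l}
      \text{for every } \varepsilon > 0 \text{ there is an algorithm multiplying} \\
      n \times n \text{ matrices in } O(n^{c+\varepsilon}) \text{ arithmetic operations}
    \end{array}
  \right\}
\]
% Trivially 2 <= omega <= 3; the bound of the main paper on this page
% is omega < 2.3727.
```

Because ω is an infimum, a bound like ω < 2.3727 asserts the existence of algorithms with exponent arbitrarily close to that value, not a single algorithm running in exactly O(n^2.3727) time; the "limit point" entry below makes this precise.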
Counting points on curves using a map to P1, II
  • J. Tuitman
  • Mathematics, Computer Science
    Finite Fields Their Appl.
  • 2017
Simultaneous Conversions with the Residue Number System Using Linear Algebra
This work provides a highly optimized implementation of the algorithm for simultaneous conversions between a given set of integers and their Residue Number System representations based on linear algebra and significantly improves the overall running time of matrix multiplication.
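As background on the Residue Number System used in the entry above: a minimal sketch of the forward and inverse conversions via the Chinese Remainder Theorem. This is the textbook one-integer-at-a-time version, not the paper's optimized simultaneous-conversion algorithm (which batches conversions through matrix multiplication).

```python
from math import prod

def to_rns(x, moduli):
    """Forward conversion: represent x by its residues mod pairwise-coprime moduli."""
    return [x % m for m in moduli]

def from_rns(residues, moduli):
    """Inverse conversion via the Chinese Remainder Theorem.

    Recovers the unique x in [0, prod(moduli)) with x = r_i (mod m_i).
    Requires Python 3.8+ for pow(base, -1, mod) modular inverses.
    """
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m                      # product of the other moduli
        x += r * Mi * pow(Mi, -1, m)     # CRT reconstruction term
    return x % M
```

In an RNS, addition and multiplication act componentwise on the residues with no carries between components, which is what makes batched linear-algebra formulations of the conversions attractive.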
Matrix Multiplication, a Little Faster
A generalization of Probert’s lower bound that holds under change of basis is proved, showing that for matrix multiplication algorithms with a 2×2 base case, the leading coefficient of the Strassen-Winograd algorithm cannot be further reduced and is hence optimal.
Fast Matrix Multiplication: Limitations of the Coppersmith-Winograd Method
A new framework is described extending the original laser method, which is the method underlying the algorithms by Coppersmith and Winograd, Stothers, Vassilevska-Williams and Le Gall, and is the first to explain why taking tensor powers of the Coppersmith-Winograd identity results in faster algorithms.


Matrix multiplication via arithmetic progressions
A new method for accelerating matrix multiplication asymptotically is presented, by using a basic trilinear form which is not a matrix product, and making novel use of the Salem-Spencer Theorem.
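The Salem-Spencer theorem invoked above guarantees dense sets of integers containing no three-term arithmetic progression. A small illustrative check of that property; the base-3 digits-{0,1} construction below is a standard simple example of a progression-free set, not the construction used in the paper:

```python
from itertools import combinations

def is_3ap_free(s):
    """True iff no three distinct elements of s form an arithmetic progression."""
    return not any(a + c == 2 * b for a, b, c in combinations(sorted(s), 3))

def digits_are_01_base3(n):
    """True iff every base-3 digit of n is 0 or 1."""
    while n:
        if n % 3 == 2:
            return False
        n //= 3
    return True

# Numbers below 27 whose base-3 digits avoid 2: adding two such numbers
# never carries, which is what rules out 3-term progressions.
S = [n for n in range(27) if digits_are_01_base3(n)]
```

Coppersmith and Winograd use such progression-free sets to zero out unwanted blocks of a trilinear form so that the surviving blocks are independent matrix products.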
Group-theoretic algorithms for matrix multiplication
  • C. Umans
  • Computer Science, Mathematics
    46th Annual IEEE Symposium on Foundations of Computer Science (FOCS'05)
  • 2005
The group-theoretic approach to fast matrix multiplication introduced by Cohn and Umans is developed, and for the first time it is used to derive algorithms asymptotically faster than the standard algorithm.
On the Asymptotic Complexity of Matrix Multiplication
A consequence of these results is that $\omega $, the exponent for matrix multiplication, is a limit point, that is, it cannot be realized by any single algorithm.
Strassen's algorithm is not optimal: trilinear technique of aggregating, uniting and canceling for constructing fast algorithms for matrix operations
  • V. Pan
  • Computer Science
    19th Annual Symposium on Foundations of Computer Science (sfcs 1978)
  • 1978
A new technique of trilinear operations of aggregating, uniting and canceling is introduced and applied to constructing fast linear non-commutative algorithms for matrix multiplication. The result is
General Context-Free Recognition in Less than Cubic Time
  • L. Valiant
  • Computer Science
    J. Comput. Syst. Sci.
  • 1975
Partial and Total Matrix Multiplication
By combining Pan’s trilinear technique with a strong version of the compression theorem for the case of several disjoint matrix multiplications it is shown that multiplication of N \times N matrices (over arbitrary fields) is possible in time.
A group-theoretic approach to fast matrix multiplication
  • Henry Cohn, C. Umans
  • Mathematics
    44th Annual IEEE Symposium on Foundations of Computer Science, 2003. Proceedings.
  • 2003
A new, group-theoretic approach to bounding the exponent of matrix multiplication is developed, including a proof that certain families of groups of order n^(2+o(1)) support n × n matrix multiplication.
Relative bilinear complexity and matrix multiplication.
The significance of this notion lies, above all, in the key role of matrix multiplication for numerical linear algebra. Thus the following problems all have the same "exponent": matrix inversion, ...
Some Properties of Disjoint Sums of Tensors Related to Matrix Multiplication
Let t be a disjoint sum of tensors associated to matrix multiplication. The rank of the tensorial powers of t is bounded by an expression involving the elements of t and an exponent for matrix multiplication.