# Multiplying matrices faster than Coppersmith-Winograd

```bibtex
@inproceedings{Williams2012MultiplyingMF,
  title     = {Multiplying matrices faster than Coppersmith-Winograd},
  author    = {Virginia Vassilevska Williams},
  booktitle = {STOC '12},
  year      = {2012}
}
```

We develop an automated approach for designing matrix multiplication algorithms based on constructions similar to the Coppersmith-Winograd construction. Using this approach we obtain a new improved bound on the matrix multiplication exponent: ω < 2.3727.
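The Coppersmith-Winograd-style constructions behind ω < 2.3727 are not practical to implement, but sub-cubic matrix multiplication itself is easy to demonstrate with Strassen's classical 1969 scheme, the starting point of this line of work. A minimal sketch (assuming NumPy and power-of-two dimensions; not the paper's method):

```python
import numpy as np

def strassen(A, B):
    """Strassen's 1969 scheme: multiply two n x n matrices (n a power of two)
    with 7 recursive block multiplications instead of 8, giving
    O(n^log2(7)) ~ O(n^2.807) arithmetic operations."""
    n = A.shape[0]
    if n <= 64:                      # cutoff: fall back to the classical product
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```

The base-case cutoff avoids recursion overhead on small blocks; matrices of general size can be handled by zero-padding up to the next power of two.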

## 927 Citations

Matrix Multiplication, a Little Faster

- Computer Science
- 2020

Strassen’s algorithm (1969) was the first sub-cubic matrix multiplication algorithm. Winograd (1971) improved the leading coefficient of its complexity from 7 to 6. There have been many subsequent ...
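The leading coefficients come from the additions in the recursion. Assuming the usual counting model, a 2×2 recursive scheme with 7 block multiplications and a block additions/subtractions satisfies T(n) = 7 T(n/2) + a (n/2)², whose solution has leading coefficient 1 + a/3 in front of n^log₂7. Taking 18, 15, and 12 as the addition counts for Strassen, Strassen-Winograd, and Karstadt-Schwartz respectively (these counts are an assumption of this sketch), the coefficients 7, 6, and 5 fall out:

```python
import math

def ops(n, adds):
    """Exact operation count T(n) = 7 T(n/2) + adds * (n/2)^2, T(1) = 1,
    for a 2x2 recursive scheme with 7 block multiplications."""
    if n == 1:
        return 1          # one scalar multiplication at the base
    return 7 * ops(n // 2, adds) + adds * (n // 2) ** 2

w = math.log2(7)          # ~2.807, the Strassen exponent
n = 2 ** 12
# Addition counts: 18 (Strassen), 15 (Strassen-Winograd), 12 (Karstadt-Schwartz)
for name, adds in [("Strassen", 18), ("Strassen-Winograd", 15), ("Karstadt-Schwartz", 12)]:
    print(f"{name}: T(n)/n^w = {ops(n, adds) / n ** w:.3f}")  # tends to 1 + adds/3
```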

Fast Output-Sensitive Matrix Multiplication

- Computer Science, ESA
- 2015

A new randomized algorithm is presented that can use the known fast square matrix multiplication algorithms to perform fewer arithmetic operations than the current state of the art for output matrices that are sparse.

An Adaptable Fast Matrix Multiplication Algorithm, Going Beyond the Myth of Decimal War

- Computer Science, ArXiv
- 2013

In this paper we present an adaptable fast matrix multiplication (AFMM) algorithm for two n × n dense matrices which computes the product matrix with average complexity T_avg(n) = d₁d₂n³ with the…

Counting points on curves using a map to P1

- Mathematics, Computer Science, Math. Comput.
- 2016

A new algorithm to compute the zeta function of a curve over a finite field using a map to the projective line is introduced and all the necessary bounds are developed.

An overview of the recent progress on matrix multiplication

- Mathematics
- 2012

The exponent ω of matrix multiplication is the infimum over all real numbers c such that for all ε > 0 there is an algorithm that multiplies n × n matrices using at most O(n^{c+ε}) arithmetic operations…
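Written out, and using M(n) for the minimal number of arithmetic operations needed to multiply two n × n matrices (a notation not in the excerpt above), the standard definition is:

```latex
% M(n): minimal number of arithmetic operations to multiply two n x n matrices
\omega \;=\; \inf \{\, c \in \mathbb{R} \;:\; M(n) = O(n^{c}) \,\}
% Consequence: for every \varepsilon > 0 there is an O(n^{\omega + \varepsilon})
% algorithm, but O(n^{\omega}) itself need not be attained by any single
% algorithm -- \omega can be a limit point of achievable exponents.
```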

Counting points on curves using a map to P1, II

- Mathematics, Computer Science, Finite Fields Their Appl.
- 2017

Simultaneous Conversions with the Residue Number System Using Linear Algebra

- Computer Science, Mathematics, ACM Trans. Math. Softw.
- 2018

This work provides a highly optimized implementation of the algorithm for simultaneous conversions between a given set of integers and their Residue Number System representations based on linear algebra and significantly improves the overall running time of matrix multiplication.
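The abstract does not spell out the conversions, but the underlying maps are the classical Chinese-remainder correspondence between an integer and its residues. A minimal sketch of both directions (not the paper's optimized linear-algebra batch version; requires Python 3.8+ for the modular-inverse form of `pow`):

```python
from math import prod

def to_rns(x, moduli):
    """Forward conversion: integer -> tuple of residues, one per modulus."""
    return tuple(x % m for m in moduli)

def from_rns(residues, moduli):
    """Inverse conversion via the Chinese Remainder Theorem.
    Moduli must be pairwise coprime; result is reduced mod their product."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m): modular inverse of Mi mod m
    return x % M

moduli = (3, 5, 7)
print(to_rns(23, moduli))             # → (2, 3, 2)
print(from_rns((2, 3, 2), moduli))    # → 23
```

Batching many such conversions against a fixed modulus set is what lets the paper cast the work as matrix multiplication.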

Matrix Multiplication, a Little Faster (Regular Submission)

- Computer Science
- 2017

A generalization of Probert’s lower bound that holds under change of basis is proved, showing that for matrix multiplication algorithms with a 2×2 base case, the leading coefficient of the Strassen-Winograd algorithm cannot be further reduced and is hence optimal.

Fast Matrix Multiplication: Limitations of the Coppersmith-Winograd Method

- Computer Science, STOC
- 2015

A new framework is described extending the original laser method, which is the method underlying the algorithms by Coppersmith and Winograd, Stothers, Vassilevska Williams and Le Gall, and is the first to explain why taking tensor powers of the Coppersmith-Winograd identity results in faster algorithms.

Graph expansion and communication costs of fast matrix multiplication: regular submission

- Computer Science, SPAA '11
- 2011

The communication cost of algorithms is shown to be closely related to the expansion properties of the corresponding computation graphs, and the first lower bounds on their communication costs are obtained.

## References

Showing 1–10 of 29 references

Matrix multiplication via arithmetic progressions

- Mathematics, STOC
- 1987

A new method for accelerating matrix multiplication asymptotically is presented, by using a basic trilinear form which is not a matrix product, and making novel use of the Salem-Spencer Theorem.
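The Salem-Spencer theorem supplies surprisingly dense subsets of {1, …, N} containing no three-term arithmetic progression, which Coppersmith and Winograd use to pack many independent products into one trilinear form. The construction itself is involved, but the defining property is easy to check; a tiny verifier, with a classical small AP-free set as the example:

```python
def is_ap_free(s):
    """Return True if no three elements x < y < z of s satisfy y - x = z - y,
    i.e. s contains no three-term arithmetic progression."""
    s = sorted(set(s))
    have = set(s)
    for i, x in enumerate(s):
        for z in s[i + 1:]:
            # A 3-AP exists iff the midpoint of some pair is also in the set.
            if (x + z) % 2 == 0 and (x + z) // 2 in have:
                return False
    return True

print(is_ap_free({1, 2, 4, 5, 10, 11, 13, 14}))  # → True  (a classical AP-free set)
print(is_ap_free({1, 2, 3}))                     # → False (1, 2, 3 is a progression)
```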

Group-theoretic algorithms for matrix multiplication

- Computer Science, Mathematics, 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS'05)
- 2005

The group-theoretic approach to fast matrix multiplication introduced by Cohn and Umans is developed, and for the first time it is used to derive algorithms asymptotically faster than the standard algorithm.

On the Asymptotic Complexity of Matrix Multiplication

- Computer Science, Mathematics, SIAM J. Comput.
- 1982

A consequence of these results is that ω, the exponent for matrix multiplication, is a limit point; that is, it cannot be realized by any single algorithm.

Strassen's algorithm is not optimal: trilinear technique of aggregating, uniting and canceling for constructing fast algorithms for matrix operations

- Computer Science, 19th Annual Symposium on Foundations of Computer Science (SFCS 1978)
- 1978

A new technique of trilinear operations of aggregating, uniting and canceling is introduced and applied to constructing fast linear non-commutative algorithms for matrix multiplication. The result is…

General Context-Free Recognition in Less than Cubic Time

- Computer Science, J. Comput. Syst. Sci.
- 1975

Partial and Total Matrix Multiplication

- Mathematics, Computer Science, SIAM J. Comput.
- 1981

By combining Pan’s trilinear technique with a strong version of the compression theorem for the case of several disjoint matrix multiplications, it is shown that multiplication of N × N matrices (over arbitrary fields) is possible in time…

A group-theoretic approach to fast matrix multiplication

- Mathematics, 44th Annual IEEE Symposium on Foundations of Computer Science, 2003. Proceedings.
- 2003

A new, group-theoretic approach to bounding the exponent of matrix multiplication is developed, including a proof that certain families of groups of order n^{2+o(1)} support n × n matrix multiplication.

On Sunflowers and Matrix Multiplication

- Mathematics, Computational Complexity Conference
- 2012

It is shown that the Erdős-Rado sunflower conjecture (if true) implies a negative answer to the "no three disjoint equivoluminous subsets" question of Coppersmith and Winograd [CW90], and that the Coppersmith-Winograd conjecture actually implies the Cohn et al. conjecture.

Relative bilinear complexity and matrix multiplication.

- Computer Science
- 1987

The significance of this notion lies, above all, in the key role of matrix multiplication for numerical linear algebra. Thus the following problems all have exponent ω: matrix inversion,…

Some Properties of Disjoint Sums of Tensors Related to Matrix Multiplication

- Mathematics, SIAM J. Comput.
- 1982

Let t be a disjoint sum of tensors associated to matrix multiplication. The rank of the tensorial powers of t is bounded by an expression involving the elements of t and an exponent for matrix…