On the asymptotic complexity of matrix multiplication

@article{Coppersmith1981OnTA,
  title={On the asymptotic complexity of matrix multiplication},
  author={Don Coppersmith and Shmuel Winograd},
  journal={22nd Annual Symposium on Foundations of Computer Science (sfcs 1981)},
  year={1981},
  pages={82-90}
}
  • D. Coppersmith, S. Winograd
  • Published 28 October 1981
  • Computer Science, Mathematics
  • 22nd Annual Symposium on Foundations of Computer Science (sfcs 1981)
The main results of this paper have the following flavor: given one algorithm for multiplying matrices, there exists another, better, algorithm. A consequence of these results is that ω, the exponent for matrix multiplication, is a limit point, that is, it cannot be realized by any single algorithm. We also use these results to construct a new algorithm which shows that ω < 2.495364.
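As background for reading these bounds (standard exponent arithmetic, not specific to this paper's construction): if some bilinear algorithm multiplies m × m matrices using t multiplications, then applying it recursively to block matrices gives

  ω ≤ log_m t.

For instance, Strassen's scheme (m = 2, t = 7) gives ω ≤ log₂ 7 ≈ 2.8074, while the construction in this paper pushes the bound to ω < 2.495364. The limit-point statement then says that the infimum defining ω is never attained: every concrete algorithm has some exponent e strictly greater than ω, and the paper's results turn it into another algorithm with a smaller exponent.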
Fast rectangular matrix multiplications and improving parallel matrix computations
TLDR
The known algorithms for sequential rectangular matrix multiplication are accelerated to yield an improvement of the current record asymptotic bounds on the deterministic arithmetic NC processor complexity of the four former ones.
A $θ(n^2)$ Time Matrix Multiplication Algorithm
  • Yijie Han
  • Computer Science, Mathematics
    ArXiv
  • 2016
TLDR
It is shown that the multiplications in $(a_0, a_1, \ldots, a_{3m-1})^T$ can be converted to $2\binom{2m+5}{5}$ multiplications, which gives a $θ(n^2)$ time algorithm for matrix multiplication.
Fast Parallel Computation of Polynomials Using Few Processors
It is shown that any multivariate polynomial that can be computed sequentially in C steps and has degree d can be computed in parallel in O((log d)(log C + log d)) steps using only $(Cd)^{O(1)}$ processors.
Fast Parallel Computation of Polynomials Using Few Processors
TLDR
It is shown that any multivariate polynomial of degree d that can be computed sequentially in C steps can be computed in parallel in O((log d)(log C + log d)) steps using only $(Cd)^{O(1)}$ processors.
A Dominance-Based Constrained Optimization Evolutionary Algorithm for the 4-th Tensor Power Problem of Matrix Multiplication
TLDR
A dominance-based constrained optimization evolutionary algorithm is proposed that can effectively solve the 4-th tensor power problem of matrix multiplication; the feasible solution obtained by this algorithm is better than the currently known solution of the problem.
Efficient Parallel Evaluation of Straight-line Code and Arithmetic Circuits
TLDR
A new parallel algorithm is given to evaluate a straight-line program of degree d and size n over a commutative semiring R in time O(log n (log nd)) using M(n) processors, where M(n) is the number of processors required for multiplying n×n matrices over the semiring R in O(log n) time.

References

Showing 1-10 of 15 references
On the Asymptotic Complexity of Matrix Multiplication
TLDR
A consequence of these results is that $\omega $, the exponent for matrix multiplication, is a limit point, that is, it cannot be realized by any single algorithm.
Duality Applied to the Complexity of Matrix Multiplication and Other Bilinear Forms
The paper considers the complexity of bilinear forms in a noncommutative ring. The dual of a computation is defined and applied to matrix multiplication and other bilinear forms. It is shown that t...
Partial and Total Matrix Multiplication
TLDR
By combining Pan's trilinear technique with a strong version of the compression theorem for the case of several disjoint matrix multiplications it is shown that multiplication of N × N matrices (over arbitrary fields) is possible in time.
Relations between exact and approximate bilinear algorithms. Applications
TLDR
The relation between APA-algorithms and EC-algorithms of complexity $(1+d)t_0$ is analyzed, with an application to problems associated with tensorial powers of a tensor, such as the matrix product.
The computational complexity of algebraic and numeric problems
Vermeidung von Divisionen [Avoiding divisions].
The extent to which the use of divisions may speed up the evaluation of polynomials is estimated from above. In particular it is shown that for multiplying general matrices the use of divisions does
Gaussian elimination is not optimal
Below we will give an algorithm which computes the coefficients of the product of two square matrices A and B of order n from the coefficients of A and B with less than 4.7·n^{log 7} arithmetical operations (a minimal sketch of this recursion appears after the reference list).
Algebra
This is a text-book intended primarily for undergraduates. It is designed to give a broad basis of knowledge comprising such theories and theorems in those parts of algebra which are mentioned in the
Pan, New Combinations of Methods for the Acceleration of Matrix Multiplication
  • SUNYA Report
  • 1980