Corpus ID: 235490274

# Multiplying Matrices Without Multiplying

@inproceedings{Blalock2021MultiplyingMW,
title={Multiplying Matrices Without Multiplying},
author={Davis W. Blalock and John V. Guttag},
booktitle={ICML},
year={2021}
}
Multiplying matrices is among the most fundamental and compute-intensive operations in machine learning. Consequently, there has been significant work on efficiently approximating matrix multiplies. We introduce a learning-based algorithm for this task that greatly outperforms existing methods. Experiments using hundreds of matrices from diverse domains show that it often runs 100× faster than exact matrix products and 10× faster than current approximate methods. In the common case that one…
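The lookup-table idea underlying this family of methods (including the Bolt work cited below) can be sketched in NumPy: encode rows of one matrix against per-subspace prototypes, precompute prototype dot products with the other matrix, and replace multiplies with table lookups. The following is a minimal illustration using plain k-means encoding, not the paper's actual MADDNESS algorithm, which learns fast hash functions instead; `pq_matmul` and all parameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def pq_matmul(A, B, n_subspaces=4, n_prototypes=16, n_iter=10):
    """Approximate A @ B via product quantization: quantize rows of A
    against per-subspace prototypes, precompute prototype-times-B tables,
    then replace multiplies with table lookups and sums."""
    N, D = A.shape
    d = D // n_subspaces
    out = np.zeros((N, B.shape[1]))
    for s in range(n_subspaces):
        As = A[:, s*d:(s+1)*d]                 # (N, d) subspace slice
        # Crude k-means to learn this subspace's prototypes.
        C = As[rng.choice(N, n_prototypes, replace=False)]
        for _ in range(n_iter):
            dist = ((As[:, None, :] - C[None]) ** 2).sum(-1)
            idx = dist.argmin(1)
            for k in range(n_prototypes):
                if (idx == k).any():
                    C[k] = As[idx == k].mean(0)
        # Lookup table: prototype dot products with B's subspace rows.
        table = C @ B[s*d:(s+1)*d, :]          # (n_prototypes, M)
        out += table[idx]                      # lookups and adds only
    return out

A = rng.standard_normal((200, 32))
B = rng.standard_normal((32, 8))
approx = pq_matmul(A, B)
rel_err = np.linalg.norm(approx - A @ B) / np.linalg.norm(A @ B)
```

In the common case that one matrix (here `B`) is known ahead of time, the prototypes and tables can be precomputed offline, so the per-query work involves no multiplications at all.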

#### References

Showing 1-10 of 79 references
Bolt: Accelerated Data Mining with Fast Vector Compression
• KDD
• 2017
Introduces a vector quantization algorithm that compresses vectors over 12× faster than existing techniques while also accelerating approximate vector operations such as distance and dot product computations by up to 10×.
Improved Approximation Algorithms for Large Matrices via Random Projections
• Tamás Sarlós
• FOCS
• 2006
The key idea is that low-dimensional embeddings can be used to eliminate data dependence and provide versatile, linear-time, pass-efficient matrix computation.
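Random-projection sketching in this style can be illustrated in a few lines: compress the shared inner dimension with a Gaussian map and multiply the two smaller sketches. A minimal NumPy sketch; the dimensions and the dense Gaussian choice are illustrative (structured transforms can make the projection itself faster).

```python
import numpy as np

rng = np.random.default_rng(0)

n, d, m, k = 300, 200, 100, 256
A = rng.standard_normal((n, d))
B = rng.standard_normal((d, m))

# S has i.i.d. N(0, 1/k) entries, so E[S.T @ S] = I_d and the sketched
# product (A @ S.T) @ (S @ B) is an unbiased estimate of A @ B, with
# Frobenius error on the order of ||A||_F * ||B||_F / sqrt(k).
S = rng.standard_normal((k, d)) / np.sqrt(k)
approx = (A @ S.T) @ (S @ B)

err = np.linalg.norm(approx - A @ B) / (np.linalg.norm(A) * np.linalg.norm(B))
```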
A practical streaming approximate matrix multiplication algorithm
• 2018
Proposes an algorithm that is more accurate, robust to noise, and invariant to concept drift in the data, while having almost the same running time as the state-of-the-art algorithm.
Fast Monte Carlo Algorithms for Matrices I: Approximating Matrix Multiplication
• SIAM J. Comput.
• 2006
Presents a model (the pass-efficient model) in which the efficiency of these and other approximate matrix algorithms can be studied, and argues that it is well suited to applications involving massive data sets.
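The basic Monte Carlo estimator from this line of work samples column-of-A / row-of-B pairs with norm-proportional probabilities and rescales them so the estimate is unbiased. A minimal NumPy illustration with arbitrary dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)

n, d, m, c = 300, 200, 100, 256
A = rng.standard_normal((n, d))
B = rng.standard_normal((d, m))

# Sample c column/row index pairs with probabilities
# p_i proportional to ||A[:, i]|| * ||B[i, :]||, which minimize the
# expected Frobenius error of the estimator.
p = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
p /= p.sum()
idx = rng.choice(d, size=c, p=p)

# Unbiased estimator: rescaled sum of the sampled outer products.
approx = (A[:, idx] / (c * p[idx])) @ B[idx, :]

err = np.linalg.norm(approx - A @ B) / (np.linalg.norm(A) * np.linalg.norm(B))
```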
A sparse Johnson-Lindenstrauss transform
• STOC
• 2010
Obtains a sparse version of the fundamental dimension-reduction tool, the Johnson-Lindenstrauss transform: using hashing and local densification, it constructs a sparse projection matrix with just Õ(1/ε) non-zero entries per column, and shows a matching lower bound on the sparsity for a large class of projection matrices.
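The flavor of such constructions can be shown by building a projection matrix with a small fixed number s of random signed nonzeros per column, so applying it costs O(s) per input coordinate rather than O(k). This is a rough NumPy sketch with dense storage and illustrative parameters, not the paper's exact hashing-plus-densification construction:

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, s = 1000, 256, 8   # s = nonzeros per column (illustrative)

# Sparse embedding: each input coordinate maps to s random output rows
# with signs +-1/sqrt(s). Each column has unit norm, so E||S @ x||^2
# equals ||x||^2, and most entries of S are zero.
S = np.zeros((k, d))
for j in range(d):
    rows = rng.choice(k, size=s, replace=False)
    S[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)

x = rng.standard_normal(d)
distortion = np.linalg.norm(S @ x) / np.linalg.norm(x)
```

A sparse-matrix format (e.g. CSC) would realize the speedup in practice; dense storage is used here only to keep the sketch short.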
Scalable and Sustainable Deep Learning via Randomized Hashing
• KDD
• 2017
Presents a hashing-based technique that drastically reduces the computation needed to train and test neural networks, and demonstrates the scalability and energy efficiency of the algorithm through experiments on several datasets.
Fast Approximate Matrix Multiplication by Solving Linear Systems
• Electron. Colloquium Comput. Complex.
• 2014
The main contribution is to reduce the matrix multiplication problem to solving a set of linear equations, then use standard techniques to find an approximate solution to that system in $\tilde{O}(n^2)$ time.
Polynomial Codes: an Optimal Design for High-Dimensional Coded Matrix Multiplication
• NIPS
• 2017
Considers a large-scale matrix multiplication problem where the computation is carried out on a distributed system with a master node and multiple worker nodes, where each worker can store parts…
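For a 2×2 blocking, the polynomial-code construction has each worker multiply polynomial encodings of the blocks at its own evaluation point; the mn = 4 block products then appear as coefficients of a degree-3 matrix polynomial, recoverable by interpolation from any 4 workers' results. A small NumPy sketch with illustrative sizes and evaluation points:

```python
import numpy as np

rng = np.random.default_rng(0)

p, q, r = 4, 6, 4
A = rng.standard_normal((p, q))
B = rng.standard_normal((q, r))

# Split A into 2 row blocks and B into 2 column blocks.
A0, A1 = A[:2], A[2:]
B0, B1 = B[:, :2], B[:, 2:]

# Each "worker" i evaluates the encoded polynomials at x_i and multiplies:
# (A0 + A1*x)(B0 + B1*x^2) = A0B0 + A1B0*x + A0B1*x^2 + A1B1*x^3.
xs = np.array([1.0, 2.0, 3.0, 4.0])
results = [(A0 + A1 * x) @ (B0 + B1 * x**2) for x in xs]

# Decode: the 4 block products are the coefficients of a degree-3
# matrix polynomial; recover them by solving the Vandermonde system.
V = np.vander(xs, 4, increasing=True)             # V[i, k] = xs[i]**k
stacked = np.stack(results).reshape(4, -1)
coeffs = np.linalg.solve(V, stacked).reshape(4, 2, 2)
C = np.block([[coeffs[0], coeffs[2]],
              [coeffs[1], coeffs[3]]])            # equals A @ B
```

Any 4 distinct evaluation points would do, which is the optimality property: the recovery threshold equals the number of block products, so straggling workers beyond that count can simply be ignored.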
Near Optimal Frequent Directions for Sketching Dense and Sparse Matrices
Provides new space-optimal algorithms with faster running times, and shows that these running times are near-optimal unless the state-of-the-art running time of matrix multiplication can be improved significantly.
Frequent Direction Algorithms for Approximate Matrix Multiplication with Applications in CCA
• IJCAI
• 2016
Proposes a deterministic algorithm, FD-AMM, for computing an approximation to the product of two given matrices, with a stronger error bound than both random-selection and random-projection algorithms at the same space complexity.
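The Frequent Directions primitive behind FD-AMM streams rows into a small buffer and, whenever it fills, shrinks all directions by the smallest singular value so that S.T @ S tracks A.T @ A. A minimal NumPy version of Liberty's algorithm, quoting the classic 2·||A||_F²/ℓ spectral-error guarantee:

```python
import numpy as np

def frequent_directions(A, ell):
    """Liberty's Frequent Directions: stream the rows of A into an
    ell x d sketch S with ||A.T @ A - S.T @ S||_2 <= 2*||A||_F^2/ell."""
    _, d = A.shape
    S = np.zeros((ell, d))
    next_free = 0
    for row in A:
        if next_free == ell:
            # Buffer full: shrink every direction by the smallest
            # singular value, which zeroes out at least the last row.
            _, sig, Vt = np.linalg.svd(S, full_matrices=False)
            delta = sig[-1] ** 2
            sig = np.sqrt(np.maximum(sig ** 2 - delta, 0.0))
            S = sig[:, None] * Vt
            next_free = ell - 1
        S[next_free] = row
        next_free += 1
    return S

rng = np.random.default_rng(0)
A = rng.standard_normal((300, 20))
S = frequent_directions(A, ell=8)
err = np.linalg.norm(A.T @ A - S.T @ S, 2)
```

One way to apply this to a product A.T @ B, in the spirit of FD-AMM (an assumption here, not necessarily the paper's exact algorithm), is to sketch the column-wise concatenation [A, B] and read the cross term off the corresponding off-diagonal block of S.T @ S.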