Corpus ID: 235490274

Multiplying Matrices Without Multiplying
Davis W. Blalock and John V. Guttag
Multiplying matrices is among the most fundamental and compute-intensive operations in machine learning. Consequently, there has been significant work on efficiently approximating matrix multiplies. We introduce a learning-based algorithm for this task that greatly outperforms existing methods. Experiments using hundreds of matrices from diverse domains show that it often runs 100× faster than exact matrix products and 10× faster than current approximate methods. In the common case that one…


Bolt: Accelerated Data Mining with Fast Vector Compression
Introduces a vector quantization algorithm that compresses vectors over 12× faster than existing techniques while also accelerating approximate vector operations, such as distance and dot product computations, by up to 10×.
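The core trick behind lookup-based vector quantization methods like Bolt can be sketched as follows: encode each database vector as per-subspace centroid indices, then answer dot-product queries with small table lookups instead of multiply-adds. Below is a minimal product-quantization sketch in NumPy, not Bolt's actual implementation; the codebooks are left untrained (random) and all sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
D, M, K, N = 16, 4, 8, 100   # vector dim, subspaces, centroids per subspace, database size
d = D // M                   # sub-vector length

X = rng.standard_normal((N, D))             # database vectors
codebooks = rng.standard_normal((M, K, d))  # toy codebooks (a real system trains these)

# Encode: per subspace, store the index of the nearest centroid
codes = np.empty((N, M), dtype=np.int64)
for m in range(M):
    sub = X[:, m * d:(m + 1) * d]                                   # (N, d)
    dists = ((sub[:, None, :] - codebooks[m][None]) ** 2).sum(-1)   # (N, K)
    codes[:, m] = dists.argmin(axis=1)

def approx_dots(q):
    """Approximate q . x for every database vector using M table lookups each."""
    # Precompute q's dot product with every centroid: one small (M, K) table
    table = np.stack([codebooks[m] @ q[m * d:(m + 1) * d] for m in range(M)])
    return table[np.arange(M), codes].sum(axis=1)                   # (N,)

q = rng.standard_normal(D)
approx = approx_dots(q)

# Sanity check: the lookups reproduce the exact dot product with the *quantized* vectors
recon = codebooks[np.arange(M)[None, :], codes].reshape(N, D)
assert np.allclose(approx, recon @ q)
```

The lookup result is exactly the dot product between the query and the reconstructed (quantized) vectors, so approximation error comes entirely from quantization, while the per-vector query cost drops from D multiply-adds to M table lookups.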
Improved Approximation Algorithms for Large Matrices via Random Projections
  • Tamás Sarlós
  • Mathematics, Computer Science
  • 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS'06), 2006
The key idea is that low-dimensional embeddings can be used to eliminate data dependence and provide more versatile, linear-time, pass-efficient matrix computation.
A practical streaming approximate matrix multiplication algorithm
This work proposes an algorithm that is more accurate, robust to noise, and invariant to concept drift in the data, while having almost the same running time as the state-of-the-art algorithm.
Fast Monte Carlo Algorithms for Matrices I: Approximating Matrix Multiplication
A model (the pass-efficient model) is presented in which the efficiency of these and other approximate matrix algorithms may be studied, and which, it is argued, is well suited to many applications involving massive data sets.
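The sampling scheme behind this line of work is simple enough to sketch: pick column/row index pairs with probability proportional to the product of the column and row norms, and rescale each sampled outer product so the estimate of A @ B is unbiased. A minimal NumPy sketch; the sample size s and the matrix shapes are illustrative.

```python
import numpy as np

def mc_matmul(A, B, s, rng):
    """Unbiased Monte Carlo estimate of A @ B from s sampled outer products."""
    n = A.shape[1]
    # Near-optimal sampling probabilities: p_i proportional to |A[:, i]| * |B[i, :]|
    p = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = p / p.sum()
    idx = rng.choice(n, size=s, p=p)          # sample with replacement
    # Each sampled outer product A[:, i] B[i, :] is rescaled by 1 / (s * p_i)
    return (A[:, idx] / (s * p[idx])) @ B[idx, :]

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 200))
B = rng.standard_normal((200, 25))
est = mc_matmul(A, B, s=2000, rng=rng)
rel_err = np.linalg.norm(est - A @ B) / np.linalg.norm(A @ B)
```

The expected Frobenius error shrinks like 1/sqrt(s), so the method pays off when a crude estimate from far fewer than n outer products suffices.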
A sparse Johnson–Lindenstrauss transform
A sparse version of the fundamental tool in dimension reduction, the Johnson–Lindenstrauss transform, is obtained using hashing and local densification to construct a sparse projection matrix with just Õ(1/ε) non-zero entries per column, and a matching lower bound on the sparsity is shown for a large class of projection matrices.
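The flavor of such sparse embeddings can be illustrated with the simplest extreme: a CountSketch-style matrix with a single random ±1 per column. This is not the construction from the paper (which uses more nonzeros per column for stronger guarantees), and the sizes are illustrative; it is only a sketch of why sparsity pays off.

```python
import numpy as np

def sparse_embedding(n, k, rng):
    """k x n sketch matrix with exactly one random +/-1 per column (CountSketch-style)."""
    S = np.zeros((k, n))
    S[rng.integers(0, k, size=n), np.arange(n)] = rng.choice([-1.0, 1.0], size=n)
    return S

rng = np.random.default_rng(1)
n, k = 2000, 400
S = sparse_embedding(n, k, rng)
x = rng.standard_normal(n)

# S has n nonzeros instead of k * n, so applying it costs O(n) instead of O(k * n),
# while E[||S @ x||^2] = ||x||^2: norms are preserved in expectation
distortion = np.linalg.norm(S @ x) ** 2 / np.linalg.norm(x) ** 2
```

Because each column has one nonzero, projecting a vector touches each coordinate exactly once, which is what makes sparse transforms attractive for sketching large matrices.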
Scalable and Sustainable Deep Learning via Randomized Hashing
This work presents a novel hashing-based technique to drastically reduce the amount of computation needed to train and test neural networks, and demonstrates the scalability and sustainability (energy efficiency) of the proposed algorithm via rigorous experimental evaluations on several datasets.
Fast Approximate Matrix Multiplication by Solving Linear Systems
The main contribution is to first reduce the matrix multiplication problem to solving a set of linear equations and then use standard techniques to find an approximate solution to that system in $\tilde{O}(n^2)$ time.
Polynomial Codes: an Optimal Design for High-Dimensional Coded Matrix Multiplication
We consider a large-scale matrix multiplication problem where the computation is carried out using a distributed system with a master node and multiple worker nodes, where each worker can store parts…
Near Optimal Frequent Directions for Sketching Dense and Sparse Matrices
New space-optimal algorithms with faster running times are provided, and it is shown that the running times of these algorithms are near-optimal unless the state-of-the-art running time of matrix multiplication can be improved significantly.
Frequent Direction Algorithms for Approximate Matrix Multiplication with Applications in CCA
This paper proposes a deterministic algorithm, FD-AMM, for computing an approximation to the product of two given matrices that has a stronger error bound than both random selection and random projection algorithms at the same space complexity.
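Frequent Directions itself is short enough to sketch directly: stream the rows of A into a buffer, and whenever it fills, take an SVD and shrink the squared singular values by the ell-th one, zeroing out the weakest directions. Below is a minimal NumPy version using the common doubled-buffer variant (not FD-AMM specifically; the sizes are illustrative). The deterministic guarantee ||A^T A - B^T B||_2 <= ||A||_F^2 / ell is checked at the end.

```python
import numpy as np

def frequent_directions(A, ell):
    """Return B (ell x d) with ||A.T @ A - B.T @ B||_2 <= ||A||_F^2 / ell."""
    n, d = A.shape
    B = np.zeros((2 * ell, d))   # doubled buffer: shrink after every ell inserted rows
    nxt = 0

    def shrink():
        nonlocal B, nxt
        _, s, Vt = np.linalg.svd(B, full_matrices=False)
        s2 = np.maximum(s ** 2 - s[ell - 1] ** 2, 0.0)  # subtract ell-th squared value
        B = np.zeros((2 * ell, d))
        B[:len(s2)] = np.sqrt(s2)[:, None] * Vt
        nxt = ell - 1            # rows from index ell-1 onward are now zero

    for row in A:
        if nxt == 2 * ell:
            shrink()
        B[nxt] = row
        nxt += 1
    shrink()                     # flush any still-buffered rows
    return B[:ell]

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 20))
B = frequent_directions(A, ell=8)
err = np.linalg.norm(A.T @ A - B.T @ B, ord=2)
bound = np.linalg.norm(A, "fro") ** 2 / 8
```

The sketch B stands in for A in downstream products: B.T @ B approximates A.T @ A using only ell rows of storage, which is the property the FD-AMM-style algorithms above build on.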