Corpus ID: 236171141

Flexible Distributed Matrix Multiplication

Weiqi Li, Zhen Chen, Zhiying Wang, Syed Ali Jafar, and Hamid Jafarkhani
The distributed matrix multiplication problem with an unknown number of stragglers is considered, where the goal is to efficiently and flexibly obtain the product of two massive matrices by distributing the computation across N servers. There are up to N − R stragglers, but the exact number is not known a priori. Motivated by reducing the computation load of each server, a flexible solution is proposed that fully utilizes the computation capability of the available servers. The computing task for each…
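The setup above (N servers, of which up to N − R may straggle) can be illustrated with a minimal one-sided polynomial-coding sketch: only A is encoded, B is sent to every server, and the product is recovered from any R responses. This is a generic MDS-coded matrix multiplication, not the paper's flexible construction; the names `encode`/`decode`, the real-valued arithmetic, and the choice of evaluation points are assumptions for illustration.

```python
import numpy as np

def encode(A, points, R):
    """Split A into R row-blocks A_0..A_{R-1}; server i receives the
    evaluation f(x_i) = sum_k A_k * x_i**k (a polynomial/MDS encoding)."""
    blocks = np.split(A, R, axis=0)
    return [sum(Ak * x**k for k, Ak in enumerate(blocks)) for x in points]

def decode(products, points, R):
    """Recover A @ B from any R server results C_i = f(x_i) @ B by
    interpolating the coefficient blocks A_k @ B (a Vandermonde solve)."""
    V = np.vander(np.asarray(points, dtype=float), R, increasing=True)
    flat = np.stack(products).reshape(R, -1)   # one row per surviving server
    coeffs = np.linalg.solve(V, flat)          # row k is (A_k @ B) flattened
    return np.vstack([c.reshape(products[0].shape) for c in coeffs])

# N = 5 servers, tolerating up to N - R = 2 stragglers (R = 3)
rng = np.random.default_rng(0)
A, B = rng.normal(size=(6, 4)), rng.normal(size=(4, 2))
N, R = 5, 3
points = np.arange(1.0, N + 1)
coded = encode(A, points, R)
results = [Ai @ B for Ai in coded]             # each server's local product
survivors = [0, 2, 4]                          # servers 1 and 3 straggled
C = decode([results[i] for i in survivors], points[survivors], R)
assert np.allclose(C, A @ B)
```

Decoding here is a real-valued Vandermonde solve; practical schemes work over finite fields, but the structure is the same.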

Related Papers

Flexible Constructions for Distributed Matrix Multiplication
Motivated by reducing latency, a flexible solution is proposed that fully utilizes the computation capability of available servers to efficiently and flexibly obtain the product of two massive matrices by distributing the computation across servers.
On the Capacity of Secure Distributed Matrix Multiplication
This paper focuses on information-theoretically secure distributed matrix multiplication with the goal of characterizing the minimum communication overhead, and proposes a novel scheme that lower-bounds the capacity.
Distributed and Private Coded Matrix Computation with Flexible Communication Load
A novel class of secure codes, referred to as secure generalized PolyDot codes, is introduced that generalizes previously published non-secure versions of these codes for matrix multiplication and extends the state of the art by allowing a flexible trade-off between recovery threshold and communication load for a fixed maximum number of colluding workers.
Adaptive Private Distributed Matrix Multiplication
A rateless private matrix-matrix multiplication scheme, called RPM3, keeps sending tasks and receiving results until it can decode the multiplication, rendering the scheme flexible and adaptive to heterogeneous environments.
Straggler-Proofing Massive-Scale Distributed Matrix Multiplication with D-Dimensional Product Codes
This work presents a novel coded matrix-matrix multiplication scheme based on d-dimensional product codes that allows order-optimal computation/communication costs for the encoding/decoding procedures while achieving near-optimal compute time.
Polynomial Codes: an Optimal Design for High-Dimensional Coded Matrix Multiplication
We consider a large-scale matrix multiplication problem where the computation is carried out using a distributed system with a master node and multiple worker nodes, where each worker can store parts…
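The polynomial-code idea can be sketched in a few lines, assuming real-valued matrices and NumPy (the actual codes operate over finite fields; the function names here are illustrative): A is split into m row-blocks and B into n column-blocks, each worker multiplies one coded evaluation of each, and any m·n worker results determine the product.

```python
import numpy as np

def worker_products(A, B, m, n, points):
    """Worker at point x computes Atilde(x) @ Btilde(x), where
    Atilde(x) = sum_j A_j x**j       (m row-blocks of A) and
    Btilde(x) = sum_k B_k x**(k*m)   (n column-blocks of B)."""
    Ab = np.split(A, m, axis=0)
    Bb = np.split(B, n, axis=1)
    out = []
    for x in points:
        At = sum(Aj * x**j for j, Aj in enumerate(Ab))
        Bt = sum(Bk * x**(k * m) for k, Bk in enumerate(Bb))
        out.append(At @ Bt)
    return out

def interpolate_product(products, points, m, n, block_shape):
    """Atilde(x) @ Btilde(x) is a polynomial of degree m*n - 1 whose
    coefficient of x**(j + k*m) is exactly A_j @ B_k, so any m*n
    evaluations recover every block of A @ B."""
    T = m * n
    V = np.vander(np.asarray(points, dtype=float), T, increasing=True)
    coeffs = np.linalg.solve(V, np.stack(products).reshape(T, -1))
    return np.block([[coeffs[j + k * m].reshape(block_shape)
                      for k in range(n)] for j in range(m)])

rng = np.random.default_rng(1)
A, B = rng.normal(size=(4, 3)), rng.normal(size=(3, 4))
m = n = 2                                    # recovery threshold m*n = 4
points = np.arange(1.0, 7.0)                 # 6 workers
results = worker_products(A, B, m, n, points)
fastest = [0, 2, 3, 5]                       # any 4 responses decode
C = interpolate_product([results[i] for i in fastest],
                        points[fastest], m, n, (2, 2))
assert np.allclose(C, A @ B)
```

The exponent scheme (x**j for A, x**(k*m) for B) is what makes every cross-term A_j @ B_k land on a distinct power of x, which is why the recovery threshold is m·n rather than the number of workers.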
Codes for Distributed Finite Alphabet Matrix-Vector Multiplication
This paper develops novel code constructions applicable to binary matrix-vector multiplication via a variant of the Four-Russians method called the Mailman algorithm, and presents a trade-off between the communication and computation cost of distributed coded matrix-vector multiplication for general, possibly non-binary, matrices.
Matrix sparsification for coded matrix multiplication
This work shows that the Short-Dot scheme is optimal if a Maximum Distance Separable (MDS) matrix is fixed, and proposes a new encoding scheme that achieves strictly greater sparsity than existing schemes.
Straggler Mitigation in Distributed Matrix Multiplication: Fundamental Limits and Optimal Coding
This work proposes a novel coding strategy, named entangled polynomial code, designing intermediate computations at the workers to minimize the recovery threshold, and characterizes the optimal recovery threshold among all linear coding strategies within a factor of 2 using bilinear complexity.
Hierarchical coded matrix multiplication
This paper decomposes the overall matrix multiplication task into a hierarchy of heterogeneously sized subtasks and exploits the work completed by stragglers, rather than ignoring it, even if that amount is much less than that completed by the fastest workers.