• Corpus ID: 227239062

# Parity-Checked Strassen Algorithm

@article{Wang2020ParityCheckedSA,
  title={Parity-Checked Strassen Algorithm},
  author={Hsin-Po Wang and Iwan M. Duursma},
  journal={ArXiv},
  year={2020},
  volume={abs/2011.15082}
}
• Published 30 November 2020
• Computer Science
• ArXiv
To multiply astronomic matrices using parallel workers subject to straggling, we recommend interleaving checksums with some fast matrix multiplication algorithms. Nesting the parity-checked algorithms, we weave a protection of product-code flavor. Two demonstrative configurations are as follows: (A) $9$ workers multiply two $2\times 2$ matrices; each worker multiplies two linear combinations of the entries therein. Then the entry products sent from any $8$ workers suffice to assemble the matrix…
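Configuration (A) builds on Strassen's classical seven-multiplication algorithm for $2\times 2$ blocks, where each of the seven products is a product of two linear combinations of entries and can thus be assigned to a separate worker. The following minimal sketch shows only those seven standard products; the two additional checksum products that bring the worker count to $9$, and their exact linear combinations, are specified in the paper and are not reproduced here.

```python
import numpy as np

def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using Strassen's 7 products.

    Each product p_i is a product of two linear combinations of
    entries, so each could be computed by a separate worker.  The
    parity-checked scheme adds extra checksum products (not shown)
    so that a straggler's result can be recovered from the rest.
    """
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]

    p1 = (a + d) * (e + h)
    p2 = (c + d) * e
    p3 = a * (f - h)
    p4 = d * (g - e)
    p5 = (a + b) * h
    p6 = (c - a) * (e + f)
    p7 = (b - d) * (g + h)

    # Reassemble the four entries of the product matrix.
    return np.array([[p1 + p4 - p5 + p7, p3 + p5],
                     [p2 + p4,           p1 - p2 + p3 + p6]])
```

Because each worker sees only linear combinations of entries, checksums of those combinations can be interleaved without changing the per-worker workload, which is the mechanism the abstract refers to.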

## References

Showing 1–10 of 40 references
A Refined Laser Method and Faster Matrix Multiplication
• Computer Science
SODA
• 2021
This paper is a refinement of the laser method that improves the resulting value bound for most sufficiently large tensors, and obtains the best bound on $\omega$ to date.
New ways to multiply 3 x 3-matrices
• Mathematics, Computer Science
J. Symb. Comput.
• 2021
Block-Diagonal and LT Codes for Distributed Computing With Straggling Servers
• Computer Science
IEEE Transactions on Communications
• 2019
Two coded schemes for the distributed computing problem of multiplying a matrix by a set of vectors are proposed and it is shown numerically that the proposed schemes outperform other schemes in the literature, with the LT code-based scheme yielding the best performance for the scenarios considered.
Numerically Stable Polynomially Coded Computing
• Computer Science
2019 IEEE International Symposium on Information Theory (ISIT)
• 2019
It is shown via new theoretical results on the condition number, as well as numerical experiments, that the application of these codes can lead to significantly more numerically stable computation than the current monomial-basis codes.
Polar Coded Distributed Matrix Multiplication
• Computer Science
ArXiv
• 2019
A polar coding mechanism for distributed matrix multiplication is proposed, designed specifically for polar codes in erasure channels with real-valued inputs and outputs, and implemented on a serverless computing platform.
Rateless Codes for Near-Perfect Load Balancing in Distributed Matrix-Vector Multiplication
• Computer Science
Proc. ACM Meas. Anal. Comput. Syst.
• 2019
This paper proposes a rateless fountain coding strategy that achieves the best of both worlds: its latency is proved to be asymptotically equal to that of ideal load balancing, and it performs asymptotically zero redundant computations.
Straggler Resilient Serverless Computing Based on Polar Codes
• Computer Science
2019 57th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
• 2019
This work designs a sequential decoder specifically for polar codes in erasure channels with full-precision inputs and outputs, and introduces the idea of partial polarization, which reduces the computational burden of encoding and decoding at the expense of straggler resilience.
“Short-Dot”: Computing Large Linear Transforms Distributedly Using Coded Short Dot Products
• Computer Science
IEEE Transactions on Information Theory
• 2019
The key novelty in this work is that in the particular regime where the number of available processing nodes is greater than the total number of dot products, Short-Dot has lower expected computation time under an exponential straggling model than existing strategies.
Coded Sparse Matrix Multiplication
• Computer Science
ICML
• 2018
A new coded computation strategy, called sparse code, is developed; it achieves a near-optimal recovery threshold, low computation overhead, and linear decoding time, and it is implemented and demonstrated to outperform both uncoded and the current fastest coded strategies.