Corpus ID: 236447744

Verifiable Coded Computing: Towards Fast, Secure and Private Distributed Machine Learning

@article{Tang2021VerifiableCC,
  title={Verifiable Coded Computing: Towards Fast, Secure and Private Distributed Machine Learning},
  author={Tingting Tang and Ramy E. Ali and Hanieh Hashemi and Tynan Gangwani and Amir Salman Avestimehr and Murali Annavaram},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.12958}
}
  • Tingting Tang, Ramy E. Ali, Hanieh Hashemi, Tynan Gangwani, Amir Salman Avestimehr, Murali Annavaram
  • Published 2021
  • Computer Science, Mathematics
  • ArXiv
Stragglers, Byzantine workers, and data privacy are the main bottlenecks in distributed cloud computing. Several prior works proposed coded computing strategies to jointly address all three challenges. However, these schemes require either a large number of workers, a significant communication cost, or significant computational complexity to tolerate malicious workers. Much of the overhead in prior schemes comes from the fact that they tightly couple coding for all three problems into a single framework. In…
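As a minimal illustration of the verification side of this problem (a sketch only, not the VCC protocol from the paper), Freivalds' randomized check lets a client test an outsourced matrix product in roughly quadratic time per trial, so a Byzantine worker's corrupted result can be detected and discarded independently of whatever straggler code is used. All names and sizes below are illustrative.

```python
import numpy as np

def freivalds_check(A, B, C, trials=20, rng=None):
    """Probabilistically verify that C == A @ B.

    Each trial multiplies by a random 0/1 vector, so an incorrect C
    passes all trials with probability at most 2**(-trials)."""
    rng = np.random.default_rng() if rng is None else rng
    n = C.shape[1]
    for _ in range(trials):
        r = rng.integers(0, 2, size=(n, 1))           # random 0/1 vector
        if not np.array_equal(A @ (B @ r), C @ r):    # two cheap matrix-vector products
            return False                               # caught an incorrect result
    return True

rng = np.random.default_rng(0)
A = rng.integers(0, 10, (64, 64))
B = rng.integers(0, 10, (64, 64))
honest = A @ B
byzantine = honest.copy()
byzantine[3, 7] += 1                                   # a single corrupted entry
print(freivalds_check(A, B, honest))                   # True
print(freivalds_check(A, B, byzantine))                # False with high probability
```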

List-Decodable Coded Computing: Breaking the Adversarial Toleration Barrier
TLDR: The results show that FLCC outperforms LCC by breaking the barrier on the number of adversaries that can be tolerated; the corresponding threshold in FLCC is improved by a factor of two compared to that of LCC.
Secure Private and Adaptive Matrix Multiplication Beyond the Singleton Bound
TLDR: A framework for security against malicious adversaries in private matrix-matrix multiplication, called SRPM3, provides a computationally efficient security check that detects malicious workers with high probability and can tolerate the presence of an arbitrary number of malicious workers.
ApproxIFER: A Model-Agnostic Approach to Resilient and Robust Prediction Serving Systems
TLDR: Approximate Coded Inference (ApproxIFER) is proposed, a different approach that does not require training any parity models; hence it is agnostic to the model hosted by the cloud and can be readily applied to different data domains and model architectures.

References

SHOWING 1-10 OF 33 REFERENCES
List-Decodable Coded Computing: Breaking the Adversarial Toleration Barrier
TLDR: The results show that FLCC outperforms LCC by breaking the barrier on the number of adversaries that can be tolerated; the corresponding threshold in FLCC is improved by a factor of two compared to that of LCC.
Slalom: Fast, Verifiable and Private Execution of Neural Networks in Trusted Hardware
TLDR: Slalom is proposed, a framework for high-performance execution of deep neural networks in TEEs that securely delegates execution of all linear layers in a DNN from a TEE to a faster, yet untrusted, co-located processor.
A Scalable Approach for Privacy-Preserving Collaborative Machine Learning
TLDR: COPML, a fully decentralized training framework that achieves scalability and privacy protection simultaneously, is proposed, with strong statistical privacy guarantees against colluding parties (adversaries) with unbounded computational power.
Verifiable local computation on distributed data
TLDR: This paper proposes a multi-server verifiable local computation (VLC) model where the client can privately outsource data blocks m = (m1, ..., mn) to cloud servers and later verify computations on any portion of the outsourced data.
DRACO: Byzantine-resilient Distributed Training via Redundant Gradients
TLDR: DRACO is presented, a scalable framework for robust distributed training that uses ideas from coding theory and comes with problem-independent robustness guarantees, and is shown to be several times to orders of magnitude faster than median-based approaches.
Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent
TLDR: Krum is proposed, an aggregation rule that satisfies the resilience property capturing the basic requirements to guarantee convergence despite f Byzantine workers, and is argued to be the first provably Byzantine-resilient algorithm for distributed SGD.
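The aggregation rule summarized above is concrete enough to sketch. Below is a minimal NumPy version of Krum as described by Blanchard et al.: each candidate gradient is scored by the summed squared distance to its n - f - 2 nearest neighbours, and the lowest-scoring gradient is selected. The toy data at the end is illustrative only.

```python
import numpy as np

def krum(gradients, f):
    """Return the gradient with the smallest summed squared distance
    to its n - f - 2 closest other gradients (f = Byzantine bound)."""
    n = len(gradients)
    assert n > 2 * f + 2, "Krum requires n > 2f + 2"
    G = np.stack(gradients)
    dists = np.sum((G[:, None, :] - G[None, :, :]) ** 2, axis=-1)  # pairwise squared distances
    scores = []
    for i in range(n):
        others = np.sort(np.delete(dists[i], i))      # distances to the other workers
        scores.append(others[: n - f - 2].sum())      # keep the n - f - 2 closest
    return gradients[int(np.argmin(scores))]

rng = np.random.default_rng(1)
grads = [rng.normal(1.0, 0.1, size=5) for _ in range(8)] + [np.full(5, 100.0)] * 2
print(krum(grads, f=2))   # selects one of the eight honest gradients
```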
INTERPOL: Information Theoretically Verifiable Polynomial Evaluation
TLDR: By generalizing INTERPOL to a multiparty setting consisting of a network of n untrusted nodes, where each node is interested in evaluating the same polynomial, it is demonstrated that an overall computational complexity comparable to a trusted setup can be achieved, while guaranteeing information-theoretic verification at each node.
Polynomial Codes: an Optimal Design for High-Dimensional Coded Matrix Multiplication
We consider a large-scale matrix multiplication problem where the computation is carried out using a distributed system with a master node and multiple worker nodes, where each worker can store parts of the input matrices…
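Since this construction underlies much of the coded-computing literature cited here, a small NumPy sketch of the polynomial-code idea may help: each worker receives one coded block of A and one of B, its product is an evaluation of a matrix polynomial whose coefficients are exactly the blocks of A^T B, and any m*n finished workers suffice to interpolate them. Block counts, worker count, and evaluation points below are illustrative choices, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
s, r, w = 6, 4, 4
m = n = 2                                        # column blocks of A and of B
A, B = rng.normal(size=(s, r)), rng.normal(size=(s, w))
A_blocks, B_blocks = np.hsplit(A, m), np.hsplit(B, n)

num_workers, threshold = 6, m * n                # any m*n finished workers suffice
xs = np.arange(1.0, num_workers + 1)             # distinct evaluation points

# Encoding: worker i gets sum_j A_j x^j and sum_k B_k x^(k*m).
A_enc = [sum(A_blocks[j] * x**j for j in range(m)) for x in xs]
B_enc = [sum(B_blocks[k] * x**(k * m) for k in range(n)) for x in xs]

# Each worker computes one small product; pretend the last two straggle.
results = [A_enc[i].T @ B_enc[i] for i in range(num_workers)]
done = list(range(threshold))

# Decoding: interpolate the degree m*n - 1 matrix polynomial whose
# coefficient of x^(j + k*m) is the block A_j^T B_k.
V = np.vander(xs[done], threshold, increasing=True)
R = np.stack([results[i] for i in done]).reshape(threshold, -1)
coeffs = np.linalg.solve(V, R).reshape(threshold, r // m, w // n)

C = np.block([[coeffs[j + k * m] for k in range(n)] for j in range(m)])
print(np.allclose(C, A.T @ B))                   # True
```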
Speeding Up Distributed Machine Learning Using Codes
TLDR: This paper focuses on two of the most basic building blocks of distributed learning algorithms, matrix multiplication and data shuffling, and uses codes to reduce communication bottlenecks by exploiting the excess in storage.
Collaborative Decoding of Polynomial Codes for Distributed Computation
We show that Polynomial codes (and some related codes) used for distributed matrix multiplication are interleaved Generalized Reed-Solomon codes and hence can be collaboratively decoded. We consider…