Near-Optimal Straggler Mitigation for Distributed Gradient Methods

@article{Li2018NearOptimalSM,
  title={Near-Optimal Straggler Mitigation for Distributed Gradient Methods},
  author={Songze Li and Seyed Mohammadreza Mousavi Kalan and Amir Salman Avestimehr and Mahdi Soltanolkotabi},
  journal={2018 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW)},
  year={2018},
  pages={857-866}
}
Modern learning algorithms use gradient descent updates to train inferential models that best explain data. Scaling these approaches to massive data sizes requires distributed gradient descent schemes in which worker nodes compute partial gradients on their local partitions of the data and send the results to a master node, where the partial gradients are aggregated into a full gradient and the model is updated. However, a major performance bottleneck that arises is…
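
To make the master/worker pattern described in the abstract concrete, here is a minimal sketch in Python of uncoded distributed gradient descent for least-squares regression, simulated in a single process. It is not the paper's straggler-mitigation scheme; the function names, problem sizes, and step size are illustrative assumptions.

```python
# Sketch (assumed setup, not the paper's coded scheme): each "worker" holds a
# shard of the data and computes a partial gradient; the "master" sums the
# partial gradients into the full gradient and takes a gradient step.
import numpy as np

def worker_partial_gradient(X_part, y_part, w):
    """Partial gradient of 0.5 * ||X_part @ w - y_part||^2 on one worker's shard."""
    return X_part.T @ (X_part @ w - y_part)

def master_update(w, partial_grads, lr, n_samples):
    """Aggregate partial gradients into the full (averaged) gradient and update w."""
    full_grad = sum(partial_grads) / n_samples
    return w - lr * full_grad

# Illustrative synthetic data (sizes are arbitrary choices for this sketch).
rng = np.random.default_rng(0)
n, d, n_workers = 1200, 10, 4
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.01 * rng.standard_normal(n)

shards = list(zip(np.array_split(X, n_workers), np.array_split(y, n_workers)))
w = np.zeros(d)
for _ in range(200):
    # In a real cluster these calls run in parallel, and the master waits for
    # all workers before updating; a single slow (straggling) worker delays
    # this synchronization step, which is the bottleneck the paper targets.
    partial_grads = [worker_partial_gradient(Xp, yp, w) for Xp, yp in shards]
    w = master_update(w, partial_grads, lr=0.1, n_samples=n)

print("estimation error:", np.linalg.norm(w - w_true))
```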

