Gradient Sparsification for Communication-Efficient Distributed Optimization

@article{Wangni2017GradientSF,
  title={Gradient Sparsification for Communication-Efficient Distributed Optimization},
  author={Jianqiao Wangni and Jialei Wang and Ji Liu and Tong Zhang},
  journal={CoRR},
  year={2017},
  volume={abs/1710.09854}
}
Modern large scale machine learning applications require stochastic optimization algorithms to be implemented on distributed computational architectures. A key bottleneck is the communication overhead for exchanging information such as stochastic gradients among different workers. In this paper, to reduce the communication cost we propose a convex optimization formulation to minimize the coding length of stochastic gradients. To solve the optimal sparsification efficiently, several simple and…
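The core idea behind this kind of gradient sparsification is to drop each gradient coordinate with some probability and rescale the surviving coordinates so the sparsified gradient remains an unbiased estimate of the original. Below is a minimal Python sketch of that idea; the function name `sparsify_gradient` and the `target_sparsity` knob are illustrative assumptions, and the magnitude-proportional keep probabilities are a simple stand-in for the probabilities the paper obtains from its convex optimization formulation.

import numpy as np

def sparsify_gradient(g, target_sparsity=0.1, eps=1e-12):
    # Unbiased gradient sparsification (sketch, not the paper's exact solver).
    # Coordinate i is kept with probability p_i proportional to |g_i| (capped
    # at 1) and rescaled by 1/p_i, so the sparsified vector has the same
    # expectation as g. `target_sparsity` is a hypothetical knob for the
    # expected fraction of coordinates transmitted.
    g = np.asarray(g, dtype=float)
    abs_g = np.abs(g)
    budget = target_sparsity * g.size
    # Keep-probabilities scaled so that sum(p_i) is at most `budget`.
    p = np.minimum(1.0, budget * abs_g / (abs_g.sum() + eps))
    mask = np.random.rand(g.size) < p
    sparse_g = np.zeros_like(g)
    # Rescale surviving coordinates by 1/p_i to keep the estimator unbiased.
    sparse_g[mask] = g[mask] / p[mask]
    return sparse_g, mask

In a distributed setting, each worker would apply this to its local stochastic gradient before communication, sending only the nonzero coordinates (indices and rescaled values), which is where the reduction in coding length comes from.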

