Harnessing smoothness to accelerate distributed optimization

@inproceedings{Qu2016HarnessingST,
  title={Harnessing smoothness to accelerate distributed optimization},
  author={Guannan Qu and Na Li},
  booktitle={2016 IEEE 55th Conference on Decision and Control (CDC)},
  year={2016},
  pages={159--166}
}
There has been a growing effort in studying the distributed optimization problem over a network, where the objective is to minimize a global function formed by a sum of local functions using only local computation and communication. The literature has developed consensus-based distributed (sub)gradient descent (DGD) methods and has shown that they achieve the same O(log t/√t) convergence rate as centralized (sub)gradient descent (CGD) when the objective is convex but possibly nonsmooth. However, when…
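The DGD scheme the abstract refers to can be illustrated with a minimal sketch: each agent mixes its estimate with its neighbors' estimates via a doubly stochastic weight matrix and then takes a local gradient step with a diminishing step size. The ring network, the quadratic local functions f_i(x) = ½(x − b_i)², and the 1/√t step-size schedule below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Hedged sketch of consensus-based distributed (sub)gradient descent (DGD).
# Toy setup (an assumption, not from the paper): n agents on a ring, each
# holding a local quadratic f_i(x) = 0.5 * (x - b_i)^2, so the minimizer of
# the global sum is the mean of the b_i.

rng = np.random.default_rng(0)
n = 5
b = rng.standard_normal(n)           # local targets; global optimum is b.mean()

# Doubly stochastic mixing matrix for a ring (lazy uniform weights).
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = 1 / 3
    W[i, (i + 1) % n] = 1 / 3
    W[i, i] = 1 / 3

x = np.zeros(n)                      # each agent's local estimate
for t in range(1, 5001):
    grad = x - b                     # local gradients of the f_i
    alpha = 1.0 / np.sqrt(t)         # diminishing step size, as in DGD
    x = W @ x - alpha * grad         # consensus step + local gradient step

err = np.max(np.abs(x - b.mean()))  # worst agent's distance to the optimum
print(err)
```

With the diminishing step size, all agents drift toward consensus on the global minimizer, but only slowly; the paper's point is that exploiting smoothness allows faster rates than this baseline.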
