Accelerated Distributed Nesterov Gradient Descent

@inproceedings{Qu2019AcceleratedDN,
  title={Accelerated Distributed Nesterov Gradient Descent},
  author={Guannan Qu and Na Li},
  year={2019}
}
  • Guannan Qu, Na Li
  • Published 2019
  • Mathematics
  • This paper considers the distributed optimization problem over a network, where the objective is to optimize a global function formed by a sum of local functions, using only local computation and communication. We develop an Accelerated Distributed Nesterov Gradient Descent (Acc-DNGD) method. When the objective function is convex and $L$-smooth, we show that it achieves an $O(\frac{1}{t^{1.4-\epsilon}})$ convergence rate for all $\epsilon \in (0, 1.4)$. We also show the convergence rate can be…
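
The abstract describes a consensus-based first-order method that combines Nesterov-style momentum with gradient tracking across the network. The sketch below is a minimal illustration of that general template in Python/NumPy, not the authors' exact Acc-DNGD recursion: the constant step size eta, the momentum parameter gamma, the mixing matrix W, and the toy quadratic example are all illustrative assumptions (the paper itself uses a vanishing step-size schedule to obtain the $O(\frac{1}{t^{1.4-\epsilon}})$ rate).

import numpy as np

# Hedged sketch of distributed Nesterov-style gradient descent with
# gradient tracking over a doubly stochastic mixing matrix W.
# NOT the paper's exact Acc-DNGD update: eta (step size) and gamma
# (momentum) are illustrative constants, whereas Acc-DNGD uses a
# carefully designed vanishing step-size schedule.
def distributed_nesterov_tracking(W, grads, x0, eta=0.05, gamma=0.8, iters=500):
    """
    W     : (n, n) doubly stochastic mixing matrix of the network
    grads : list of n callables; grads[i](x) returns the gradient of
            the local objective f_i at x
    x0    : (n, d) initial iterates, one row per agent
    """
    n, d = x0.shape
    x = x0.copy()
    y = x0.copy()                                      # extrapolated points
    g = np.stack([grads[i](y[i]) for i in range(n)])   # local gradients
    s = g.copy()                                       # gradient tracker: s_i
                                                       # estimates the average
                                                       # gradient across agents
    for _ in range(iters):
        x_next = W @ y - eta * s                  # consensus + tracked step
        y = x_next + gamma * (x_next - x)         # Nesterov extrapolation
        g_next = np.stack([grads[i](y[i]) for i in range(n)])
        s = W @ s + g_next - g                    # update preserves the
        g = g_next                                # average-gradient invariant
        x = x_next
    return x

# Toy usage (all names here are hypothetical): n agents jointly minimize
# f(x) = sum_i 0.5 * ||x - a_i||^2, whose minimizer is the average of the a_i.
n, d = 4, 3
rng = np.random.default_rng(0)
a = rng.normal(size=(n, d))
grads = [lambda x, ai=ai: x - ai for ai in a]
W = np.full((n, n), 1.0 / n)        # complete graph with uniform weights
x = distributed_nesterov_tracking(W, grads, np.zeros((n, d)))
# Each row of x should be close to a.mean(axis=0).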


    Citations

    Publications citing this paper.
    SHOWING 1-10 OF 36 CITATIONS

    A Sharp Convergence Rate Analysis for Distributed Accelerated Gradient Methods

    CITES METHODS, BACKGROUND & RESULTS
    HIGHLY INFLUENCED

    Accelerated Primal-Dual Algorithms for Distributed Smooth Convex Optimization over Networks

    CITES METHODS, RESULTS & BACKGROUND
    HIGHLY INFLUENCED

    Gradient-Consensus Method for Distributed Optimization in Directed Multi-Agent Networks

    CITES METHODS, BACKGROUND & RESULTS
    HIGHLY INFLUENCED

    A Unification, Generalization, and Acceleration of Exact Distributed First Order Methods

    CITES METHODS & BACKGROUND
    HIGHLY INFLUENCED

    Distributed Adaptive Newton Methods with Globally Superlinear Convergence

    CITES METHODS

    Revisiting EXTRA for Smooth Distributed Optimization

    CITES METHODS

    A Flexible Distributed Optimization Framework for Service of Concurrent Tasks in Processing Networks

    CITES METHODS

    References

    Publications referenced by this paper.
    SHOWING 1-10 OF 36 REFERENCES

    Fast Distributed Gradient Methods

    HIGHLY INFLUENTIAL

    Introductory Lectures on Convex Optimization - A Basic Course

    HIGHLY INFLUENTIAL

    Harnessing smoothness to accelerate distributed optimization

    • Guannan Qu, Na Li
    • Computer Science
    • 2016 IEEE 55th Conference on Decision and Control (CDC)
    • 2016

    ADD-OPT: Accelerated Distributed Directed Optimization


    Geometrically convergent distributed optimization with uncoordinated step-sizes


    NEXT: In-Network Nonconvex Optimization


    Distributed nonconvex optimization over networks

    • Paolo Di Lorenzo, Gesualdo Scutari
    • Mathematics, Computer Science
    • 2015 IEEE 6th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)
    • 2015