# Accelerated Distributed Nesterov Gradient Descent

@inproceedings{Qu2019AcceleratedDN, title={Accelerated Distributed Nesterov Gradient Descent}, author={Guannan Qu and Na Li}, year={2019} }

This paper considers the distributed optimization problem over a network, where the objective is to minimize a global function formed by a sum of local functions, using only local computation and communication. We develop an Accelerated Distributed Nesterov Gradient Descent (Acc-DNGD) method. When the objective function is convex and $L$-smooth, we show that it achieves a $O(\frac{1}{t^{1.4-\epsilon}})$ convergence rate for all $\epsilon\in(0,1.4)$. We also show the convergence rate can be…
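Methods in this family combine Nesterov-style momentum with gradient tracking, where each agent maintains a running estimate of the network-average gradient alongside its local iterate. The sketch below illustrates that structure on a toy scalar problem; the specific update equations, the step size `eta`, the momentum parameter `alpha`, the ring mixing matrix `W`, and the quadratic local objectives are all illustrative assumptions, not the paper's exact algorithm or parameters.

```python
import numpy as np

def acc_dngd_sketch(grads, x0, W, eta, alpha, iters):
    """Sketch of an accelerated distributed gradient iteration with
    gradient tracking (Acc-DNGD-style; coefficients are assumptions).

    grads : list of per-agent gradient functions (scalar decision variable)
    W     : doubly stochastic mixing matrix of the communication graph
    """
    n = len(x0)
    x = x0.copy()
    v = x0.copy()
    y = x0.copy()
    # s_i tracks the network-average gradient; initialized at local gradients
    s = np.array([grads[i](y[i]) for i in range(n)])
    for _ in range(iters):
        g_old = np.array([grads[i](y[i]) for i in range(n)])
        # Nesterov-style primal updates, each preceded by a consensus step W @ (.)
        x_new = W @ y - eta * s
        v_new = (1 - alpha) * (W @ v) + alpha * (W @ y) - (eta / alpha) * s
        y = (x_new + alpha * v_new) / (1 + alpha)
        # gradient-tracking update: preserves avg(s) = avg of local gradients
        g_new = np.array([grads[i](y[i]) for i in range(n)])
        s = W @ s + g_new - g_old
        x, v = x_new, v_new
    return x

# Toy problem: f_i(x) = 0.5 * (x - a_i)^2, so the global minimizer is mean(a).
a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
grads = [lambda x, ai=ai: x - ai for ai in a]
n = len(a)
# Doubly stochastic mixing matrix for a 5-node ring graph
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25
eta = 0.05
alpha = np.sqrt(eta)  # Nesterov-style momentum for strong convexity mu = 1
x = acc_dngd_sketch(grads, np.zeros(n), W, eta, alpha, iters=1000)
```

After enough iterations every agent's iterate should reach consensus near the global minimizer `mean(a) = 3.0`, even though no agent ever sees the full objective.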


#### Citations

##### Publications citing this paper.

Showing 1-10 of 36 citations.

- A Sharp Convergence Rate Analysis for Distributed Accelerated Gradient Methods (highly influenced)
- Accelerated Primal-Dual Algorithms for Distributed Smooth Convex Optimization over Networks (highly influenced)
- A Dual Approach for Optimal Algorithms in Distributed Optimization over Networks (highly influenced)
- Gradient-Consensus Method for Distributed Optimization in Directed Multi-Agent Networks (highly influenced)
- Optimal distributed convex optimization on slowly time-varying graphs (highly influenced)
- A Unification, Generalization, and Acceleration of Exact Distributed First Order Methods (highly influenced)
- D-DistADMM: A O(1/k) Distributed ADMM for Distributed Optimization in Directed Graph Topologies
- Distributed Adaptive Newton Methods with Globally Superlinear Convergence
- Revisiting EXTRA for Smooth Distributed Optimization
- A Flexible Distributed Optimization Framework for Service of Concurrent Tasks in Processing Networks

#### References

##### Publications referenced by this paper.

Showing 1-10 of 36 references.

- Fast Distributed Gradient Methods (highly influential)
- Introductory Lectures on Convex Optimization - A Basic Course (highly influential)
- Harnessing smoothness to accelerate distributed optimization
- ADD-OPT: Accelerated Distributed Directed Optimization
- NEXT: In-Network Nonconvex Optimization
- Distributed nonconvex optimization over networks