# Linear Convergence of First- and Zeroth-Order Primal–Dual Algorithms for Distributed Nonconvex Optimization

@article{Yi2019LinearCO,
title={Linear Convergence of First- and Zeroth-Order Primal–Dual Algorithms for Distributed Nonconvex Optimization},
author={Xinlei Yi and Shengjun Zhang and Tao Yang and Tianyou Chai and Karl Henrik Johansson},
journal={IEEE Transactions on Automatic Control},
year={2019},
volume={67},
pages={4194--4201}
}
• Published 27 December 2019
• Mathematics, Computer Science
• IEEE Transactions on Automatic Control
This article considers the distributed nonconvex optimization problem of minimizing a global cost function, formed as a sum of local cost functions, using only local information exchange. We first consider a distributed first-order primal–dual algorithm. We show that it converges sublinearly to a stationary point if each local cost function is smooth, and linearly to a global optimum under the additional condition that the global cost function satisfies the Polyak–Łojasiewicz (PL) condition. This…
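The flavor of a distributed first-order primal–dual iteration can be illustrated on a toy consensus problem. The sketch below is a generic primal–dual gradient update with a graph-Laplacian consensus term, not the paper's exact algorithm; the quadratic local costs, ring topology, and step sizes `eta`, `alpha`, `beta` are all illustrative assumptions:

```python
import numpy as np

# Toy problem: n agents, agent i holds f_i(x) = 0.5 * (x - b_i)^2 (scalar x).
# The global optimum of sum_i f_i is x* = mean(b).
np.random.seed(0)
n = 5
b = np.random.randn(n)

# Ring-graph Laplacian modeling the communication topology (an assumption).
L = 2 * np.eye(n) - np.roll(np.eye(n), 1, axis=0) - np.roll(np.eye(n), -1, axis=0)

def grad(x):
    return x - b  # stacked local gradients

eta, alpha, beta = 0.1, 1.0, 1.0
x = np.zeros(n)  # primal variable of each agent
v = np.zeros(n)  # dual variable enforcing consensus

for _ in range(2000):
    # Primal step: local gradient + consensus penalty + dual correction.
    x_new = x - eta * (grad(x) + alpha * L @ x + v)
    # Dual ascent step driven by the consensus violation L @ x.
    v = v + eta * beta * L @ x
    x = x_new

print(np.allclose(x, b.mean(), atol=1e-4))  # prints True
```

Each agent only combines its own gradient with its ring neighbors' states (the `L @ x` products), so the update uses purely local information exchange, matching the setting described in the abstract.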
14 Citations

## Citations

• Computer Science
IEEE Transactions on Automatic Control
• 2022
This article shows the almost sure and mean-squared convergence of GT-SAGA to a first-order stationary point and describes regimes of practical significance where it outperforms existing approaches and achieves a network topology-independent iteration complexity, respectively.
• Computer Science
ArXiv
• 2020
This work shows that gradient iterations with a constant step size converge to within $\epsilon$ of the optimal value for smooth nonconvex objectives satisfying the Polyak–Łojasiewicz condition; the result also holds for smooth strongly convex objectives.
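The constant-step-size claim in this entry can be sketched on the standard nonconvex PL example $f(x) = x^2 + 3\sin^2(x)$ (used in the Polyak–Łojasiewicz literature); the starting point and step size below are arbitrary illustrative choices:

```python
import numpy as np

# f is nonconvex but has a unique stationary point at x = 0 and satisfies
# the Polyak-Lojasiewicz inequality, so constant-step gradient descent
# converges linearly to the global minimum f* = 0.
f = lambda x: x**2 + 3 * np.sin(x)**2
grad = lambda x: 2 * x + 3 * np.sin(2 * x)

x = 3.0      # arbitrary starting point
step = 0.1   # constant step size, below 1/L with L = sup|f''| = 8

for _ in range(200):
    x -= step * grad(x)

print(f(x) < 1e-10)  # prints True: f(x) is driven to the optimal value 0
```

Near the optimum the error contracts by a constant factor per iteration, which is the linear (geometric) rate that the PL condition guarantees despite the nonconvexity.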
• Computer Science
• 2022
Three decentralized primal–dual algorithms with compressed communication are shown to have convergence properties comparable to state-of-the-art algorithms without communication compression, and to find global optima at a linear convergence rate.
• Computer Science
IEEE Transactions on Automatic Control
• 2022
This paper considers distributed nonconvex optimization with the cost functions distributed over agents, and shows that the proposed distributed algorithms with compressed communication have convergence properties comparable to state-of-the-art algorithms with exact communication.
• Computer Science, Mathematics
IEEE Transactions on Control of Network Systems
• 2023
This article proposes a distributed linearized ADMM (L-ADMM) algorithm, derived from the modified ADMM algorithm by linearizing the local cost function at each iteration, and shows that the L-ADMM algorithm has the same convergence properties as the modified ADMM algorithm under the same conditions.
• Mathematics
• 2022
This paper investigates the distributed continuous-time nonconvex optimization problem over unbalanced directed networks. The objective is to cooperatively drive all the agent states to an optimal…
• Computer Science, Mathematics
• 2022
This work revisits the distributed dual averaging algorithm, which is known to converge for convex problems, and proves that the squared norm of this suboptimality measure converges at rate $O(1/t)$.
• Computer Science
• 2022
Compared with existing methods, the proposed algorithms are verified through two benchmark examples from the literature, namely black-box binary classification and generating adversarial examples from black-box DNNs, in order to compare with existing state-of-the-art centralized and distributed ZO algorithms.

## References

Showing 1–10 of 97 references

• Computer Science
IEEE Transactions on Automatic Control
• 2017
This is the first work that shows convergence to local minima specifically for a distributed augmented Lagrangian (AL) method applied to nonconvex optimization problems; distributed AL methods are known to perform very well when used to solve convex problems.
• Mathematics, Computer Science
IEEE Transactions on Signal and Information Processing over Networks
• 2019
It is shown that generalized distributed alternating direction method of multipliers (ADMM) converges Q-linearly to the solution of the mentioned optimization problem if the overall objective function is strongly convex but the functions known by each agent are allowed to be only convex.
• Computer Science, Mathematics
ICML
• 2018
This work shows that with random initialization of the primal and dual variables, both GPDA and GADMM are able to compute second-order stationary solutions (ss2) with probability one; this is the first result showing that a primal–dual algorithm is capable of finding ss2 using only first-order information.
• Computer Science, Mathematics
ArXiv
• 2019
This paper studies the convergence rate of SONATA, the first work proving a convergence rate (in particular, linear rate) for distributed algorithms applicable to such a general class of composite, constrained optimization problems over graphs.
• Computer Science, Mathematics
Math. Program.
• 2019
This paper derives linear convergence rates of several first order methods for solving smooth non-strongly convex constrained optimization problems, i.e. involving an objective function with a Lipschitz continuous gradient that satisfies some relaxed strong convexity condition.
• Computer Science, Mathematics
IEEE Transactions on Automatic Control
• 2012
A set of continuous-time distributed algorithms that solve unconstrained, separable, convex optimization problems over undirected networks with fixed topologies, called Zero-Gradient-Sum algorithms as they yield nonlinear networked dynamical systems that evolve invariantly on a zero-gradient-sum manifold and converge asymptotically to the unknown optimizer.
• Computer Science, Mathematics
IEEE Transactions on Automatic Control
• 2013
It is proved that consensus is asymptotically achieved in the network and that the algorithm converges to the set of Karush–Kuhn–Tucker points; numerical results that support the claims are provided.

### Linear convergence of gradient and proximal-gradient methods under the Polyak–Łojasiewicz condition

• Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 2016, pp. 795–811.
• 2016
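For reference, the Polyak–Łojasiewicz condition that this entry (and the main article) invokes can be stated as follows: a differentiable function $f$ with minimum value $f^\ast$ satisfies the PL inequality with modulus $\mu > 0$ if

```latex
\frac{1}{2}\,\|\nabla f(x)\|^2 \;\ge\; \mu\,\bigl(f(x) - f^\ast\bigr)
\qquad \text{for all } x .
```

If $f$ is additionally $L$-smooth, gradient descent with constant step size $1/L$ then contracts the suboptimality geometrically,

```latex
f(x_k) - f^\ast \;\le\; \Bigl(1 - \tfrac{\mu}{L}\Bigr)^{k}\,
\bigl(f(x_0) - f^\ast\bigr),
```

which is the linear convergence rate referenced throughout this page; note that PL does not require convexity or even a unique minimizer.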