Linear Convergence of First- and Zeroth-Order Primal–Dual Algorithms for Distributed Nonconvex Optimization
@article{Yi2019LinearCO, title={Linear Convergence of First- and Zeroth-Order Primal–Dual Algorithms for Distributed Nonconvex Optimization}, author={Xinlei Yi and Shengjun Zhang and Tao Yang and Tianyou Chai and Karl Henrik Johansson}, journal={IEEE Transactions on Automatic Control}, year={2022}, volume={67}, pages={4194-4201} }
This article considers the distributed nonconvex optimization problem of minimizing a global cost function, formed as a sum of local cost functions, using only local information exchange. We first consider a distributed first-order primal–dual algorithm. We show that it converges sublinearly to a stationary point if each local cost function is smooth, and linearly to a global optimum under the additional condition that the global cost function satisfies the Polyak–Łojasiewicz (PL) condition. This…
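For context, the Polyak–Łojasiewicz (PL) condition invoked in the abstract is the standard inequality below, stated for a smooth function $f$ with minimum value $f^\star$ (the notation is ours, added for clarity, not quoted from the paper):

$$\frac{1}{2}\,\|\nabla f(x)\|^2 \;\ge\; \mu\left(f(x) - f^\star\right) \quad \text{for all } x \text{ and some } \mu > 0.$$

Unlike strong convexity, the PL condition does not require convexity, which is why linear rates are attainable for the nonconvex problems considered here.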
14 Citations
A Fast Randomized Incremental Gradient Method for Decentralized Nonconvex Optimization
- Computer Science · IEEE Transactions on Automatic Control
- 2022
This article shows the almost sure and mean-squared convergence of GT-SAGA to a first-order stationary point and describes regimes of practical significance where it outperforms existing approaches and achieves a network-topology-independent iteration complexity.
On the Benefits of Multiple Gossip Steps in Communication-Constrained Decentralized Optimization
- Computer Science · ArXiv
- 2020
This work shows that running gradient iterations with a constant step size enables convergence to within $\epsilon$ of the optimal value for smooth nonconvex objectives satisfying the Polyak–Łojasiewicz condition; the result also holds for smooth strongly convex objectives.
Communication Compression for Decentralized Nonconvex Optimization
- Computer Science
- 2022
Three decentralized primal–dual algorithms with compressed communication are shown to have convergence properties comparable to state-of-the-art algorithms without communication compression, and to find global optima at a linear convergence rate.
Zeroth-order algorithms for stochastic distributed nonconvex optimization
- Computer Science, Mathematics · Autom.
- 2022
Communication Compression for Distributed Nonconvex Optimization
- Computer Science · IEEE Transactions on Automatic Control
- 2022
This paper considers distributed nonconvex optimization with the cost functions distributed over agents and shows that the proposed distributed algorithms with compressed communication have convergence properties comparable to state-of-the-art algorithms with exact communication.
Sublinear and Linear Convergence of Modified ADMM for Distributed Nonconvex Optimization
- Computer Science, Mathematics · IEEE Transactions on Control of Network Systems
- 2023
This article proposes a distributed linearized ADMM (L-ADMM) algorithm, derived from the modified ADMM algorithm by linearizing the local cost function at each iteration, and shows that the L-ADMM algorithm has the same convergence properties as the modified ADMM algorithm under the same conditions.
Fully Distributed Continuous-Time Algorithm for Nonconvex Optimization over Unbalanced Directed Networks
- Mathematics
- 2022
This paper investigates the distributed continuous-time nonconvex optimization problem over unbalanced directed networks. The objective is to cooperatively drive all the agent states to an optimal…
Rate analysis of dual averaging for nonconvex distributed optimization
- Computer Science, Mathematics
- 2022
This work revisits the distributed dual averaging algorithm, which is known to converge for convex problems, and proves that the squared norm of a suitable suboptimality measure converges at rate $O(1/t)$.
Distributed Generalized Wirtinger Flow for Interferometric Imaging on Networks
- Mathematics · IFAC-PapersOnLine
- 2022
Zeroth-Order Stochastic Coordinate Methods for Decentralized Non-convex Optimization
- Computer Science
- 2022
The proposed algorithms are verified on two benchmark examples from the literature, namely black-box binary classification and generating adversarial examples from black-box DNNs, and compared against existing state-of-the-art centralized and distributed ZO algorithms.
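As background for the zeroth-order (ZO) entries above: ZO methods replace gradients with finite-difference estimates built from function values only. Below is a minimal sketch of the classical two-point, coordinate-wise estimator; it is a generic building block under our own naming (`f`, `x`, `delta` are illustrative), not the exact scheme of any paper listed here.

```python
import numpy as np

def zo_coordinate_gradient(f, x, delta=1e-4):
    """Two-point coordinate-wise zeroth-order gradient estimate.

    Approximates each partial derivative of f at x from function
    values only: (f(x + delta*e_i) - f(x - delta*e_i)) / (2*delta).
    """
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = 1.0  # i-th coordinate direction
        g[i] = (f(x + delta * e) - f(x - delta * e)) / (2.0 * delta)
    return g

# Illustrative usage on a smooth test function f(x) = ||x||^2:
f = lambda x: float(np.sum(x ** 2))
print(zo_coordinate_gradient(f, np.array([1.0, -2.0])))  # approx. [2., -4.]
```

Randomized variants (e.g., sampling a single random direction per query) trade estimation accuracy for fewer function evaluations, which is the usual motivation for the stochastic coordinate methods cited above.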
References
Showing 1–10 of 97 references
Exponential Convergence for Distributed Optimization Under the Restricted Secant Inequality Condition
- Mathematics · IFAC-PapersOnLine
- 2020
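For reference, the restricted secant inequality (RSI) named in this title is commonly stated as follows, where $\bar{x}$ denotes the projection of $x$ onto the solution set (notation ours, for clarity):

$$\langle \nabla f(x),\, x - \bar{x} \rangle \;\ge\; \mu\,\|x - \bar{x}\|^2 \quad \text{for all } x \text{ and some } \mu > 0.$$

Like the PL condition, the RSI is weaker than strong convexity yet still strong enough to yield linear (exponential) convergence rates.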
On the Convergence of a Distributed Augmented Lagrangian Method for Nonconvex Optimization
- Computer Science · IEEE Transactions on Automatic Control
- 2017
This is the first work that shows convergence to local minima specifically for a distributed augmented Lagrangian (AL) method applied to nonconvex optimization problems; distributed AL methods are known to perform very well when used to solve convex problems.
On the Q-Linear Convergence of Distributed Generalized ADMM Under Non-Strongly Convex Function Components
- Mathematics, Computer Science · IEEE Transactions on Signal and Information Processing over Networks
- 2019
It is shown that the generalized distributed alternating direction method of multipliers (ADMM) converges Q-linearly to the solution of the considered optimization problem if the overall objective function is strongly convex, while the functions known by each agent are allowed to be merely convex.
Gradient Primal-Dual Algorithm Converges to Second-Order Stationary Solution for Nonconvex Distributed Optimization Over Networks
- Computer Science, Mathematics · ICML
- 2018
This work shows that, with random initialization of the primal and dual variables, both GPDA and GADMM compute second-order stationary solutions (ss2) with probability one; it is the first result showing that a primal–dual algorithm can find ss2 using only first-order information.
Convergence Rate of Distributed Optimization Algorithms Based on Gradient Tracking
- Computer Science, Mathematics · ArXiv
- 2019
This paper studies the convergence rate of SONATA and is the first work proving a convergence rate (in particular, a linear rate) for distributed algorithms applicable to such a general class of composite, constrained optimization problems over graphs.
Linear convergence of first order methods for non-strongly convex optimization
- Computer Science, Mathematics · Math. Program.
- 2019
This paper derives linear convergence rates for several first-order methods solving smooth, non-strongly convex, constrained optimization problems, i.e., problems whose objective function has a Lipschitz continuous gradient and satisfies a relaxed strong convexity condition.
Zero-Gradient-Sum Algorithms for Distributed Convex Optimization: The Continuous-Time Case
- Computer Science, Mathematics · IEEE Transactions on Automatic Control
- 2012
This paper proposes a set of continuous-time distributed algorithms that solve unconstrained, separable, convex optimization problems over undirected networks with fixed topologies. They are called zero-gradient-sum algorithms because they yield nonlinear networked dynamical systems that evolve invariantly on a zero-gradient-sum manifold and converge asymptotically to the unknown optimizer.
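A sketch of the mechanism described here, in our paraphrase and under the assumption of twice-differentiable, locally strongly convex local costs $f_i$: each agent starts at its own local minimizer, $x_i(0) = \arg\min_x f_i(x)$, and follows dynamics of the form

$$\dot{x}_i = -\left(\nabla^2 f_i(x_i)\right)^{-1} \sum_{j \in \mathcal{N}_i} a_{ij}\,(x_j - x_i),$$

where $\mathcal{N}_i$ and $a_{ij} = a_{ji}$ are the neighbors and edge weights of agent $i$. Summing $\frac{d}{dt}\nabla f_i(x_i) = -\sum_{j} a_{ij}(x_j - x_i)$ over $i$ shows that $\sum_i \nabla f_i(x_i(t)) = 0$ is preserved for all $t$, so any consensus point must be a stationary point of the global cost.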
Exponential convergence of distributed primal-dual convex optimization algorithm without strong convexity
- Mathematics, Computer Science · Autom.
- 2019
Convergence of a Multi-Agent Projected Stochastic Gradient Algorithm for Non-Convex Optimization
- Computer Science, Mathematics · IEEE Transactions on Automatic Control
- 2013
It is proved that consensus is asymptotically achieved in the network and that the algorithm converges to the set of Karush–Kuhn–Tucker points; numerical results sustaining these claims are provided.
Linear convergence of gradient and proximal-gradient methods under the Polyak–Łojasiewicz condition
- Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pp. 795–811
- 2016