# Asynchronous Distributed Optimization Via Randomized Dual Proximal Gradient

```bibtex
@article{Notarnicola2017AsynchronousDO,
  title   = {Asynchronous Distributed Optimization Via Randomized Dual Proximal Gradient},
  author  = {Ivano Notarnicola and Giuseppe Notarstefano},
  journal = {IEEE Transactions on Automatic Control},
  year    = {2017},
  volume  = {62},
  pages   = {2095-2106}
}
```

In this paper we consider distributed optimization problems in which the cost function is separable, i.e., a sum of possibly non-smooth functions all sharing a common variable, and can be split into a strongly convex term and a convex one. The second term is typically used to encode constraints or to regularize the solution. We propose a class of distributed optimization algorithms based on proximal gradient methods applied to the dual problem. We show that, by choosing suitable primal variable…
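The building block behind such methods is the proximal gradient update: a gradient step on the smooth term followed by the proximal operator of the non-smooth term. As a minimal illustration (a centralized sketch, not the paper's distributed dual algorithm; the quadratic loss, the ℓ1 regularizer, and all names below are illustrative assumptions):

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient(A, b, lam, step, iters=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient iterations."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)                         # gradient of the smooth term
        x = soft_threshold(x - step * grad, step * lam)  # prox step on the non-smooth term
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(20)
L = np.linalg.norm(A, 2) ** 2                            # Lipschitz constant of the gradient
x_hat = proximal_gradient(A, b, lam=0.1, step=1.0 / L)
```

In the paper's setting this kind of update is applied to the dual problem, whose separable structure lets each agent perform its block of the update asynchronously.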

## 44 Citations

Distributed Optimization with Coupling Constraints via Dual Proximal Gradient Method and Applications to Asynchronous Networks

- 2021

In this paper, we consider solving a distributed optimization problem (DOP) with coupling constraints in a multiagent network based on proximal gradient method. In this problem, each agent aims to…

Distributed constrained convex optimization and consensus via dual decomposition and proximal minimization

- Mathematics, Computer Science
- 2016 IEEE 55th Conference on Decision and Control (CDC)
- 2016

This work considers a general class of convex optimization problems over time-varying, multi-agent networks that naturally arise in many application domains, such as energy systems and wireless networks, and proposes a novel distributed algorithm based on a combination of dual decomposition and proximal minimization.

Constraint-Coupled Distributed Optimization: A Relaxation and Duality Approach

- Computer Science, Mathematics
- IEEE Transactions on Control of Network Systems
- 2020

It is proved that agents asymptotically compute their portion of an optimal (feasible) solution of the original problem, and that this primal recovery property is obtained without the averaging mechanisms typically used in dual decomposition methods.

Towards Totally Asynchronous Primal-Dual Optimization in Blocks

- Mathematics, Computer Science
- 2020

A first-order update law is developed and shown to be robust to asynchrony, and convergence rates to a Lagrangian saddle point are derived in terms of the operations agents execute, without specifying any timing or pattern with which they must be executed.

Decentralized Dual Proximal Gradient Algorithms for Non-Smooth Constrained Composite Optimization Problems

- Computer Science
- IEEE Transactions on Parallel and Distributed Systems
- 2021

This work proposes a synchronous decentralized dual proximal gradient algorithm and its asynchronous version (AsynDe-DuPro), both based on the randomized block-coordinate method. AsynDe-DuPro attains the globally optimal solution without requiring all agents to be activated at each iteration and is thus more robust than most existing synchronous algorithms.

Composite Optimization with Coupling Constraints via Proximal Gradient Method in Partially Asynchronous Networks

- 2021

In this paper, we consider a composite optimization problem with linear coupling constraints in a multi-agent network. In this problem, all the agents jointly optimize a global composite cost…

A distributed asynchronous method of multipliers for constrained nonconvex optimization

- Computer Science, Mathematics
- Automatica
- 2019

Distributed Constrained Nonconvex Optimization: the Asynchronous Method of Multipliers

- Computer Science
- 2018

A fully asynchronous and distributed approach is proposed for tackling optimization problems in which both the objective function and the constraints may be nonconvex, and the resulting distributed algorithm is shown to be equivalent to a block coordinate descent for the minimization of the global non-separable augmented Lagrangian.

Distributed Discrete-time Optimization with Coupling Constraints Based on Dual Proximal Gradient Method in Multi-agent Networks

- Mathematics
- 2021

In this paper, we aim to solve a distributed optimization problem with coupling constraints based on proximal gradient method in a multi-agent network, where the cost function of the agents is…

Asynchronous Subgradient-push Algorithm for Distributed optimization over Directed Graphs

- Computer Science
- 2019 Chinese Control Conference (CCC)
- 2019

This paper proposes an asynchronous subgradient-push algorithm (AsySPA) to solve an additive cost optimization problem over digraphs, where each node only has access to a local convex objective function, and designs a novel mechanism that adaptively adjusts stepsizes per update in each node and constructs a delay-free augmented system.

## References

Showing 1-10 of 49 references.

Randomized dual proximal gradient for large-scale distributed optimization

- Computer Science, Mathematics
- 2015 54th IEEE Conference on Decision and Control (CDC)
- 2015

This paper proposes an asynchronous, distributed optimization algorithm over an undirected topology, based on a proximal gradient update on the dual problem, and shows that, by means of a proper choice of primal variables, the dual problem is separable and the dual variables can be stacked into separate blocks.

A fast distributed proximal-gradient method

- Computer Science, Mathematics
- 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
- 2012

This paper shows that a distributed proximal-gradient method for optimizing the average of convex functions, each of which is the private local objective of an agent in a network with time-varying topology, converges at the rate O(1/k), which is faster than the convergence rate of existing distributed methods for this problem.

A Polyhedral Approximation Framework for Convex and Robust Distributed Optimization

- Mathematics, Computer Science
- IEEE Transactions on Automatic Control
- 2014

This paper proposes a novel fully distributed and asynchronous algorithm, named cutting-plane consensus, to solve a wide class of convex and robust distributed optimization problems in peer-to-peer networks, based on a polyhedral outer approximation of the constraint sets.

Random Coordinate Descent Algorithms for Multi-Agent Convex Optimization Over Networks

- Mathematics, Computer Science
- IEEE Transactions on Automatic Control
- 2013

In this paper, we develop randomized block-coordinate descent methods for minimizing multi-agent convex optimization problems with linearly coupled constraints over networks and prove that they…

Parallel coordinate descent methods for big data optimization

- Mathematics, Computer Science
- Mathematical Programming
- 2016

In this work we show that randomized (block) coordinate descent methods can be accelerated by parallelization when applied to the problem of minimizing the sum of a partially separable smooth convex…

Distributed convex optimization via continuous-time coordination algorithms with discrete-time communication

- Mathematics, Computer Science
- Automatica
- 2015

Convergence Rates of Distributed Nesterov-Like Gradient Methods on Random Networks

- Mathematics, Computer Science
- IEEE Transactions on Signal Processing
- 2014

Two distributed Nesterov-like gradient methods, modifying the D-NG and D-NC methods, are proposed that are resilient to link failures, computationally cheap, and improve convergence rates over other gradient methods.

Parallel coordinate descent methods for composite minimization

- Mathematics
- 2013

In this paper we propose a distributed version of a randomized block-coordinate descent method for minimizing the sum of a partially separable smooth convex function and a fully separable non-smooth…

On the O(1/k) Convergence of Asynchronous Distributed Alternating Direction Method of Multipliers

- Computer Science, Mathematics
- 2013 IEEE Global Conference on Signal and Information Processing
- 2013

A novel asynchronous ADMM-based distributed method is presented for the general formulation of a network of agents that are cooperatively solving a global optimization problem, and it is shown that it converges at the rate O(1/k).

Asynchronous Newton-Raphson Consensus for Distributed Convex Optimization

- Computer Science
- 2012

This work analytically proves that, under some assumptions, an asynchronous distributed optimization technique called Newton-Raphson Consensus shows either local or global convergence properties, and corroborates this result by means of numerical simulations.