Asynchronous Distributed Optimization Via Randomized Dual Proximal Gradient

@article{Notarnicola2017AsynchronousDO,
  title={Asynchronous Distributed Optimization Via Randomized Dual Proximal Gradient},
  author={Ivano Notarnicola and Giuseppe Notarstefano},
  journal={IEEE Transactions on Automatic Control},
  year={2017},
  volume={62},
  pages={2095-2106}
}
In this paper we consider distributed optimization problems in which the cost function is separable, i.e., a sum of possibly non-smooth functions all sharing a common variable, and can be split into a strongly convex term and a convex one. The second term is typically used to encode constraints or to regularize the solution. We propose a class of distributed optimization algorithms based on proximal gradient methods applied to the dual problem. We show that, by choosing suitable primal variable…
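As an illustration of the building block the abstract refers to, the sketch below implements a plain centralized proximal-gradient iteration x ← prox_{αg}(x − α∇f(x)) on a small composite problem. This is not the paper's distributed dual algorithm; the lasso-style data, step size, and function names are hypothetical and serve only to show the update. According to the abstract, the paper applies this type of update to the dual problem rather than the primal one.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of tau * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def proximal_gradient(grad_f, prox_g, x0, step, iters=500):
    """Generic proximal-gradient iteration: x <- prox_{step*g}(x - step*grad_f(x))."""
    x = x0.copy()
    for _ in range(iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Hypothetical composite problem: min_x 0.5*||A x - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
b = rng.standard_normal(40)
lam = 0.1
grad_f = lambda x: A.T @ (A @ x - b)          # gradient of the smooth term
step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L with L = largest eigenvalue of A^T A
x_hat = proximal_gradient(grad_f, lambda v, s: soft_threshold(v, lam * s),
                          np.zeros(20), step)
```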
Distributed Optimization with Coupling Constraints via Dual Proximal Gradient Method and Applications to Asynchronous Networks
In this paper, we consider solving a distributed optimization problem (DOP) with coupling constraints in a multiagent network based on the proximal gradient method. In this problem, each agent aims to…
Distributed constrained convex optimization and consensus via dual decomposition and proximal minimization
This work considers a general class of convex optimization problems over time-varying, multi-agent networks that naturally arise in many application domains, such as energy systems and wireless networks, and proposes a novel distributed algorithm based on a combination of dual decomposition and proximal minimization.
Constraint-Coupled Distributed Optimization: A Relaxation and Duality Approach
It is proved that agents asymptotically compute their portion of an optimal (feasible) solution of the original problem, and this primal recovery property is obtained without any averaging mechanism typically used in dual decomposition methods.
Towards Totally Asynchronous Primal-Dual Optimization in Blocks
A first-order update law is developed and shown to be robust to asynchrony, and convergence rates to a Lagrangian saddle point are derived in terms of the operations agents execute, without specifying any timing or pattern with which they must be executed.
Decentralized Dual Proximal Gradient Algorithms for Non-Smooth Constrained Composite Optimization Problems
A synchronous decentralized dual proximal gradient algorithm and its asynchronous version (AsynDe-DuPro) are proposed based on the randomized block-coordinate method; the asynchronous algorithm attains the globally optimal solution without requiring all agents to be activated at each iteration and is thus more robust than most existing synchronous algorithms.
Composite Optimization with Coupling Constraints via Proximal Gradient Method in Partially Asynchronous Networks
In this paper, we consider a composite optimization problem with linear coupling constraints in a multi-agent network. In this problem, all the agents jointly optimize a global composite cost…
A distributed asynchronous method of multipliers for constrained nonconvex optimization
It is shown that the resulting distributed algorithm is equivalent to a block coordinate descent for the minimization of the global augmented Lagrangian and allows one to extend the properties of the centralized method of multipliers to the considered distributed framework.
Distributed Constrained Nonconvex Optimization: the Asynchronous Method of Multipliers
A fully asynchronous and distributed approach is proposed for tackling optimization problems in which both the objective function and the constraints may be nonconvex, and the resulting distributed algorithm is shown to be equivalent to a block coordinate descent for the minimization of the global non-separable augmented Lagrangian.
Distributed Discrete-time Optimization with Coupling Constraints Based on Dual Proximal Gradient Method in Multi-agent Networks
In this paper, we aim to solve a distributed optimization problem with coupling constraints based on the proximal gradient method in a multi-agent network, where the cost function of the agents is…
Asynchronous Subgradient-push Algorithm for Distributed optimization over Directed Graphs
This paper proposes an asynchronous subgradient-push algorithm (AsySPA) to solve an additive cost optimization problem over digraphs, where each node only has access to a local convex objective function, and designs a novel mechanism that adaptively adjusts stepsizes per update in each node and constructs a delay-free augmented system.
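For context on the algorithm family this entry builds on, here is a minimal sketch of the classical synchronous subgradient-push (push-sum) iteration over a digraph. It is not the asynchronous AsySPA variant with adaptive stepsizes described above; the ring graph, local objectives, and step-size rule are assumptions made only for illustration.

```python
import numpy as np

def subgradient_push(grads, out_neighbors, x0, stepsize, iters=300):
    """Synchronous subgradient-push over a digraph (scalar decision variable):
    each node pushes equal shares of (x_i, y_i) to its out-neighbors (self
    included), re-aggregates, forms the de-biased ratio z_i = x_i / y_i, and
    takes a subgradient step on its local objective evaluated at z_i."""
    n = len(x0)
    x = np.array(x0, dtype=float)
    y = np.ones(n)                                # push-sum weights
    deg = np.array([len(out_neighbors[j]) for j in range(n)], dtype=float)
    for t in range(iters):
        x_new, y_new = np.zeros(n), np.zeros(n)
        for j in range(n):
            for i in out_neighbors[j]:            # node j pushes a 1/deg_j share to i
                x_new[i] += x[j] / deg[j]
                y_new[i] += y[j] / deg[j]
        z = x_new / y_new                         # de-biased local estimates
        x = x_new - stepsize(t) * np.array([grads[i](z[i]) for i in range(n)])
        y = y_new
    return x / y

# Illustrative ring digraph with self-loops; local objectives f_i(x) = (x - c_i)^2
c = [1.0, 3.0, -2.0, 4.0]
grads = [lambda z, ci=ci: 2.0 * (z - ci) for ci in c]
out_neighbors = [[i, (i + 1) % 4] for i in range(4)]
estimates = subgradient_push(grads, out_neighbors, x0=[0.0] * 4,
                             stepsize=lambda t: 1.0 / np.sqrt(t + 1))
# estimates should cluster near the average of c, i.e. 1.5
```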

References

Showing 1-10 of 49 references.
Randomized dual proximal gradient for large-scale distributed optimization
This paper proposes an asynchronous, distributed optimization algorithm over an undirected topology, based on a proximal gradient update on the dual problem, and shows that by means of a proper choice of primal variables, the dual problem is separable and the dual variables can be stacked into separate blocks.
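Since this entry highlights the block structure that enables randomized updates, the sketch below shows a generic randomized block-coordinate proximal-gradient loop on a toy separable problem. It only mimics the random single-block activation pattern and is not the dual formulation of the cited paper; the data, blocks, and step size are illustrative assumptions.

```python
import numpy as np

def randomized_block_prox_grad(grad_blocks, prox_blocks, x0, step, iters=2000, seed=0):
    """At each tick a single block i is drawn uniformly at random and updated as
    x_i <- prox_{step*g_i}(x_i - step * grad_i f(x)); all other blocks stay put,
    mimicking agents that are idle at that iteration."""
    rng = np.random.default_rng(seed)
    x = [b.copy() for b in x0]
    n = len(x)
    for _ in range(iters):
        i = rng.integers(n)                               # randomly awakened block
        x[i] = prox_blocks[i](x[i] - step * grad_blocks[i](x), step)
    return x

# Toy separable problem: min sum_i 0.5*||x_i - c_i||^2  subject to  x_i >= 0
c = [np.array([1.0, -2.0]), np.array([-0.5, 3.0]), np.array([2.0, 0.0])]
grad_blocks = [lambda x, i=i: x[i] - c[i] for i in range(3)]    # block gradients
prox_nonneg = lambda v, s: np.maximum(v, 0.0)                   # projection onto x >= 0
blocks = randomized_block_prox_grad(grad_blocks, [prox_nonneg] * 3,
                                    [np.zeros(2)] * 3, step=0.5)
# each block converges to max(c_i, 0) elementwise
```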
A fast distributed proximal-gradient method
  • Annie I. Chen, A. Ozdaglar
  • Computer Science, Mathematics
  • 2012 50th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
  • 2012
This paper shows that the proposed distributed proximal-gradient method for optimizing the average of convex functions, each of which is the private local objective of an agent in a network with time-varying topology, converges at the rate 1/k, which is faster than the convergence rate of existing distributed methods for solving this problem.
A Polyhedral Approximation Framework for Convex and Robust Distributed Optimization
This paper proposes a novel fully distributed and asynchronous algorithm, named cutting-plane consensus, to solve a wide class of convex and robust distributed optimization problems in peer-to-peer networks, based on a polyhedral outer approximation of the constraint sets.
Random Coordinate Descent Algorithms for Multi-Agent Convex Optimization Over Networks
  • I. Necoara
  • Mathematics, Computer Science
  • IEEE Transactions on Automatic Control
  • 2013
In this paper, we develop randomized block-coordinate descent methods for minimizing multi-agent convex optimization problems with linearly coupled constraints over networks and prove that they…
Parallel coordinate descent methods for big data optimization
In this work we show that randomized (block) coordinate descent methods can be accelerated by parallelization when applied to the problem of minimizing the sum of a partially separable smooth convex…
Distributed convex optimization via continuous-time coordination algorithms with discrete-time communication
Exponential convergence of the proposed algorithm is established under strongly connected and weight-balanced digraph topologies when the local costs are strongly convex with globally Lipschitz gradients, and an upper bound on the stepsize is provided that guarantees exponential convergence over connected graphs for implementations with periodic communication.
Convergence Rates of Distributed Nesterov-Like Gradient Methods on Random Networks
Two distributed Nesterov-like gradient methods are proposed that modify the D-NG and D-NC methods to be resilient to link failures and computationally cheap while improving convergence rates over other gradient methods.
Parallel coordinate descent methods for composite minimization
In this paper we propose a distributed version of a randomized block-coordinate descent method for minimizing the sum of a partially separable smooth convex function and a fully separable non-smooth…
On the O(1/k) convergence of asynchronous distributed Alternating Direction Method of Multipliers
  • Ermin Wei, A. Ozdaglar
  • Computer Science, Mathematics
  • 2013 IEEE Global Conference on Signal and Information Processing
  • 2013
A novel asynchronous ADMM-based distributed method is presented for the general formulation of a network of agents that are cooperatively solving a global optimization problem, and it is shown that it converges at the rate O(1/k).
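To make the ADMM structure referenced here concrete, the following is a minimal synchronous consensus-ADMM sketch on a toy quadratic problem. It is not the asynchronous variant analyzed in the cited work; the quadratic objectives, penalty parameter, and data are purely illustrative assumptions.

```python
import numpy as np

def consensus_admm(Q, a, rho=1.0, iters=200):
    """Synchronous consensus ADMM for min_x sum_i 0.5*(x - a_i)^T Q_i (x - a_i).
    Each agent i keeps a local copy x_i and a scaled dual u_i; z is the
    consensus variable, computed here as a plain average for simplicity."""
    N, d = len(a), a[0].shape[0]
    x = [np.zeros(d) for _ in range(N)]
    u = [np.zeros(d) for _ in range(N)]
    z = np.zeros(d)
    I = np.eye(d)
    for _ in range(iters):
        for i in range(N):        # local primal updates (closed form for quadratics)
            x[i] = np.linalg.solve(Q[i] + rho * I, Q[i] @ a[i] + rho * (z - u[i]))
        z = sum(x[i] + u[i] for i in range(N)) / N        # consensus update
        for i in range(N):        # dual (multiplier) updates
            u[i] = u[i] + x[i] - z
    return z

# Hypothetical data for three agents
rng = np.random.default_rng(1)
Q = [np.diag(rng.uniform(0.5, 2.0, size=3)) for _ in range(3)]
a = [rng.standard_normal(3) for _ in range(3)]
z_consensus = consensus_admm(Q, a)
```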
Asynchronous Newton-Raphson Consensus for Distributed Convex Optimization
This work analytically proves that, under some assumptions, an asynchronous distributed optimization technique called Newton-Raphson Consensus shows either local or global convergence properties, and corroborates this result by means of numerical simulations.