A stochastic coordinate descent primal-dual algorithm and applications

@article{Bianchi2014ASC,
  title={A stochastic coordinate descent primal-dual algorithm and applications},
  author={Pascal Bianchi and Walid Hachem and Franck Iutzeler},
  journal={2014 IEEE International Workshop on Machine Learning for Signal Processing (MLSP)},
  year={2014},
  pages={1-6}
}
  • P. Bianchi, W. Hachem, F. Iutzeler
  • Published 3 July 2014
  • Computer Science
  • 2014 IEEE International Workshop on Machine Learning for Signal Processing (MLSP)
First, we introduce a splitting algorithm to minimize a sum of three convex functions. The algorithm is of primal-dual type and is inspired by recent results of Vũ and Condat. Second, we provide a randomized version of the algorithm based on the idea of coordinate descent. Finally, we address two applications of our method: (i) stochastic minibatch optimization and (ii) distributed optimization.
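
For illustration, here is a minimal Python sketch of a Vũ–Condat-style primal-dual iteration with randomized coordinate updates, assuming a problem of the form min_x f(x) + g(x) + h(Mx) with f smooth and g, h proximable. The function names (grad_f, prox_g, prox_h), the block-selection rule, and the step sizes are illustrative placeholders, not the paper's exact updates or step-size conditions.

import numpy as np

def coordinate_primal_dual(grad_f, prox_g, prox_h, M, x0, y0,
                           tau, sigma, n_iters=1000, block_size=1, seed=0):
    """Vu-Condat-style primal-dual iteration with random coordinate updates.

    prox_g(v, t) and prox_h(v, t) are assumed to compute prox_{t*g}(v)
    and prox_{t*h}(v), respectively.
    """
    rng = np.random.default_rng(seed)
    x, y = x0.astype(float), y0.astype(float)
    n = x.size
    for _ in range(n_iters):
        # Deterministic Vu-Condat step (full vectors shown for clarity).
        x_full = prox_g(x - tau * (grad_f(x) + M.T @ y), tau)
        u = y + sigma * (M @ (2.0 * x_full - x))
        # prox of the conjugate h* via Moreau's identity:
        # prox_{sigma*h*}(u) = u - sigma * prox_{h/sigma}(u/sigma)
        y_full = u - sigma * prox_h(u / sigma, 1.0 / sigma)
        # Coordinate-descent flavor: commit only a random block of primal
        # coordinates; the paper also randomizes the dual update.
        idx = rng.choice(n, size=min(block_size, n), replace=False)
        x[idx] = x_full[idx]
        y = y_full
    return x, y

# Toy usage (purely illustrative): a lasso-type problem
# 0.5*||Ax - b||^2 + lam*||x||_1, with h = 0 so the dual variable stays at 0.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A, b, lam = rng.normal(size=(40, 20)), rng.normal(size=40), 0.1
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    x_hat, _ = coordinate_primal_dual(
        grad_f=lambda x: A.T @ (A @ x - b),
        prox_g=lambda v, t: soft(v, lam * t),
        prox_h=lambda v, t: v,            # h = 0 => prox is the identity
        M=np.eye(20), x0=np.zeros(20), y0=np.zeros(20),
        tau=0.01, sigma=0.01, n_iters=2000, block_size=5)

In the distributed-optimization application mentioned in the abstract, such a random block would loosely correspond to the variables held by the agents that happen to be active at a given iteration.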

Citations

A stochastic coordinate descent primal-dual algorithm with dynamic stepsize for large-scale composite optimization
In this paper we consider the problem of minimizing the sum of two convex functions and the composition of another convex function with a continuous linear operator. With the idea ...
A Coordinate-Descent Primal-Dual Algorithm with Large Step Size and Possibly Nonseparable Functions
TLDR
This paper introduces a randomized coordinate-descent version of the Vũ–Condat algorithm in which only a subset of the coordinates of the primal and dual iterates is updated at each iteration.
A stochastic coordinate descent splitting primal-dual fixed point algorithm and applications to large-scale composite optimization
We consider the problem of minimizing the sum of two convex functions and the composition of another convex function with a continuous linear operator from the viewpoint of fixed-point ...
Using big steps in coordinate descent primal-dual algorithms
TLDR
A coordinate descent primal-dual algorithm that is provably convergent for a wider range of step-size values than previous methods is proposed, and its application to distributed optimization and large-scale support vector machine problems is discussed.
Stochastic inertial primal-dual algorithms
TLDR
Key to the analysis is casting the method as a splitting algorithm for solving monotone inclusions in suitable product spaces, with a specific choice of preconditioning operators for a variety of special cases of interest.
A stochastic primal-dual algorithm for distributed asynchronous composite optimization
TLDR
This paper combines recent results on primal-dual optimization and coordinate descent to propose an asynchronous distributed algorithm for composite optimization.
A Class of Randomized Primal-Dual Algorithms for Distributed Optimization
TLDR
The proposed approach can be used to develop novel asynchronous distributed primal-dual algorithms in a multi-agent context and may be useful for reducing computational complexity and memory requirements.
A stochastic coordinate descent inertial primal-dual algorithm for large-scale composite optimization
We consider an inertial primal-dual algorithm to minimize the sum of two convex functions and the composition of another convex function with a continuous linear operator. With ...
Stochastic forward-backward and primal-dual approximation algorithms with application to online image restoration
TLDR
A stochastic version of the forward-backward algorithm for minimizing the sum of two convex functions, one of which is not necessarily smooth, is proposed, and its convergence is established under relatively mild assumptions.
Derivation and Analysis of the Primal-Dual Method of Multipliers Based on Monotone Operator Theory
TLDR
This paper shows how PDMM combines a lifted dual form with Peaceman–Rachford splitting to facilitate distributed optimization in undirected networks, and establishes sufficient conditions for primal convergence for strongly convex differentiable functions with Lipschitz-continuous gradients.
...

References

SHOWING 1-10 OF 76 REFERENCES
A Class of Randomized Primal-Dual Algorithms for Distributed Optimization
TLDR
The proposed approach can be used to develop novel asynchronous distributed primal-dual algorithms in a multi-agent context and may be useful for reducing computational complexity and memory requirements.
Optimization with First-Order Surrogate Functions
TLDR
A new incremental scheme is introduced that experimentally matches or outperforms state-of-the-art solvers for large-scale optimization problems typically arising in machine learning.
Stochastic Alternating Direction Method of Multipliers
TLDR
This paper establishes the convergence rate of ADMM for convex problems in terms of both the objective value and the feasibility violation, and proposes a stochastic ADMM algorithm for optimization problems with non-smooth composite objective functions.
Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems
  • Y. Nesterov
  • Computer Science, Mathematics
    SIAM J. Optim.
  • 2012
TLDR
Surprisingly enough, for certain classes of objective functions, the complexity estimates of the proposed methods for solving huge-scale optimization problems are better than the standard worst-case bounds for deterministic algorithms.
A Primal–Dual Splitting Method for Convex Optimization Involving Lipschitzian, Proximable and Linear Composite Terms
TLDR
This work brings together and notably extends several classical splitting schemes, like the forward–backward and Douglas–Rachford methods, as well as the recent primal–dual method of Chambolle and Pock designed for problems with linear composite terms.
Accelerated, Parallel, and Proximal Coordinate Descent
TLDR
A new randomized coordinate descent method is proposed for minimizing the sum of convex functions, each of which depends on a small number of coordinates only; the method can be implemented without full-dimensional vector operations, which are the major bottleneck of accelerated coordinate descent.
Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
TLDR
This work proposes an incremental majorization-minimization scheme for minimizing a large sum of continuous functions, a problem of utmost importance in machine learning, and presents convergence guarantees for nonconvex and convex optimization when the upper bounds approximate the objective up to a smooth error.
The proximal point algorithm in metric spaces
The proximal point algorithm, which is a well-known tool for finding minima of convex functions, is generalized from the classical Hilbert space framework into a nonlinear setting, namely, geodesic ...
Asynchronous distributed optimization using a randomized alternating direction method of multipliers
TLDR
A new class of random asynchronous distributed optimization methods is introduced that generalizes the standard Alternating Direction Method of Multipliers to an asynchronous setting in which isolated components of the network are activated in an uncoordinated fashion.
Convergence Analysis of Primal-Dual Algorithms for a Saddle-Point Problem: From Contraction Perspective
TLDR
This paper shows that the new primal-dual methods proposed for solving a saddle-point problem are of the contraction type: the iterative sequences generated by these methods are contractive with respect to the solution set of the saddle-point problem, and global convergence can be obtained within the analytic framework of contraction-type methods.
...