Publications
Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
TLDR
It is argued that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas.
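For reference, a standard statement of the ADMM iteration in scaled dual form, for minimizing f(x) + g(z) subject to Ax + Bz = c (a sketch in common notation, not quoted from the paper; ρ > 0 is the penalty parameter and u the scaled dual variable):

\[
\begin{aligned}
x^{k+1} &= \operatorname*{arg\,min}_{x}\; f(x) + \tfrac{\rho}{2}\,\lVert Ax + Bz^{k} - c + u^{k}\rVert_2^2,\\
z^{k+1} &= \operatorname*{arg\,min}_{z}\; g(z) + \tfrac{\rho}{2}\,\lVert Ax^{k+1} + Bz - c + u^{k}\rVert_2^2,\\
u^{k+1} &= u^{k} + Ax^{k+1} + Bz^{k+1} - c.
\end{aligned}
\]

The x- and z-updates decouple across blocks of variables, which is what makes the method attractive for distributed computation.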
On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators
TLDR
This paper shows, by means of an operator called a splitting operator, that the Douglas–Rachford splitting method for finding a zero of the sum of two monotone operators is a special case of the proximal point algorithm, which allows the unification and generalization of a variety of convex programming algorithms.
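As a sketch of the objects involved (notation assumed, not taken from the paper): writing J_{λA} = (I + λA)^{-1} for the resolvent of a maximal monotone operator A, the Douglas–Rachford iteration for finding 0 ∈ A(x) + B(x) can be written

\[
x^{k} = J_{\lambda A}\big(z^{k}\big),
\qquad
z^{k+1} = z^{k} + J_{\lambda B}\big(2x^{k} - z^{k}\big) - x^{k},
\]

and the paper's point is that this update is itself a proximal point iteration z^{k+1} = (I + S)^{-1}(z^k) for a suitably constructed maximal monotone splitting operator S.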
Augmented Lagrangian and Alternating Direction Methods for Convex Optimization: A Tutorial and Some Illustrative Computational Results
TLDR
This chapter, assuming as little prior knowledge of convex analysis as possible, shows that the actual convergence mechanism of the algorithm is quite different from approximate minimization of the augmented Lagrangian, and underscores this observation with some new computational results in which the ADMM is compared to algorithms that do indeed work by approximately minimizing the augmented Lagrangian.
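For concreteness, the classical method of multipliers for minimizing f(x) subject to Ax = b alternates an augmented-Lagrangian minimization with a dual update (a standard statement, not drawn from the chapter itself):

\[
\begin{aligned}
x^{k+1} &\in \operatorname*{arg\,min}_{x}\; f(x) + \langle p^{k},\, Ax - b\rangle + \tfrac{c}{2}\,\lVert Ax - b\rVert_2^2,\\
p^{k+1} &= p^{k} + c\,\big(Ax^{k+1} - b\big).
\end{aligned}
\]

The ADMM replaces the exact minimization with a single alternating pass over two blocks of primal variables before each dual update; the tutorial's point is that its convergence mechanism is nonetheless quite different from approximately minimizing the augmented Lagrangian.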
Nonlinear Proximal Point Algorithms Using Bregman Functions, with Applications to Convex Programming
TLDR
Applying this generalization of the proximal point algorithm to convex programming, one obtains the D-function proximal minimization algorithm of Censor and Zenios, and a wide variety of new multiplier methods.
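A hedged sketch of the generalized step in the convex-minimization case (standard notation, assumed rather than quoted): given a Bregman function h with induced distance D_h, the usual quadratic proximal term is replaced by D_h:

\[
x^{k+1} = \operatorname*{arg\,min}_{x}\;\Big\{ f(x) + \tfrac{1}{c_k}\, D_h\big(x, x^{k}\big) \Big\},
\qquad
D_h(x, y) = h(x) - h(y) - \langle \nabla h(y),\, x - y\rangle.
\]

Taking h(x) = ½‖x‖² recovers the classical quadratic proximal minimization step.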
Some saddle-function splitting methods for convex programming
Consider two variations of the method of multipliers, or classical augmented Lagrangian method for convex programming. The proximal method of multipliers adjoins quadratic primal proximal terms to …
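The snippet above is cut off mid-sentence. As a hedged sketch of the first variant it names, Rockafellar's proximal method of multipliers for minimizing f(x) subject to Ax = b adds a quadratic primal proximal term to each augmented-Lagrangian subproblem (equality-constrained form shown for simplicity):

\[
\begin{aligned}
x^{k+1} &= \operatorname*{arg\,min}_{x}\; f(x) + \langle p^{k},\, Ax - b\rangle + \tfrac{c}{2}\,\lVert Ax - b\rVert_2^2 + \tfrac{1}{2c}\,\lVert x - x^{k}\rVert_2^2,\\
p^{k+1} &= p^{k} + c\,\big(Ax^{k+1} - b\big).
\end{aligned}
\]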
Approximate iterations in Bregman-function-based proximal algorithms
TLDR
This paper establishes convergence of generalized Bregman-function-based proximal point algorithms when the iterates are computed only approximately; the accuracy conditions on the iterates resemble those required for the classical “linear” proximal point algorithm, but are slightly stronger.
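For context, the classical criterion the TLDR alludes to (Rockafellar's approximation condition for the linear proximal point algorithm, stated here from the standard literature rather than from this paper) allows the resolvent to be evaluated inexactly with summable errors:

\[
\big\lVert x^{k+1} - (I + c_k T)^{-1}\big(x^{k}\big) \big\rVert \le \varepsilon_k,
\qquad
\sum_{k=0}^{\infty} \varepsilon_k < \infty.
\]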
Parallel alternating direction multiplier decomposition of convex programs
TLDR
Convergence results for both the alternating step and epigraphic methods are given, and their performance on random dense separable quadratic programs is compared.
General Projective Splitting Methods for Sums of Maximal Monotone Operators
TLDR
A general projective framework for finding a zero of the sum of n maximal monotone operators over a real Hilbert space is described, which gives rise to a family of splitting methods of unprecedented flexibility.
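A hedged sketch of the setting (notation mine, not necessarily the paper's): the methods seek a zero of the sum by working with an extended, primal-dual solution set,

\[
0 \in \sum_{i=1}^{n} T_i(z),
\qquad
\mathcal{S} = \Big\{ (z, w_1, \dots, w_n) \;:\; w_i \in T_i(z)\ \text{for each } i,\ \ \sum_{i=1}^{n} w_i = 0 \Big\},
\]

with each iteration constructing a hyperplane separating the current point from this set and then projecting onto it; the freedom in building the separator is the source of the framework's flexibility.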
...