On the Douglas—Rachford splitting method and the proximal point algorithm for maximal monotone operators

@article{Eckstein1992OnTD,
  title={On the Douglas—Rachford splitting method and the proximal point algorithm for maximal monotone operators},
  author={Jonathan Eckstein and Dimitri P. Bertsekas},
  journal={Mathematical Programming},
  year={1992},
  volume={55},
  pages={293-318}
}
This paper shows, by means of an operator called a splitting operator, that the Douglas–Rachford splitting method for finding a zero of the sum of two monotone operators is a special case of the proximal point algorithm. Therefore, applications of Douglas–Rachford splitting, such as the alternating direction method of multipliers for convex programming decomposition, are also special cases of the proximal point algorithm. This observation allows the unification and generalization of a variety of… 
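For orientation, the iteration the abstract refers to can be stated in standard resolvent notation (standard background, not quoted from the paper): with $J_{\lambda T} = (I + \lambda T)^{-1}$ and a step size $\lambda > 0$, Douglas–Rachford splitting for finding a zero of $A + B$ iterates

$$x^k = J_{\lambda B}(z^k), \qquad y^k = J_{\lambda A}(2x^k - z^k), \qquad z^{k+1} = z^k + y^k - x^k,$$

and $z^k$ converges weakly to some $z^\star$ with $J_{\lambda B}(z^\star)$ a zero of $A + B$. The paper's point is that this recursion coincides with the proximal point algorithm applied to the splitting operator mentioned in the abstract, whose zeros encode the zeros of $A + B$.

Below is a minimal numerical sketch of the same recursion for the special case in which $A$ and $B$ are normal-cone operators of two convex sets, so that each resolvent is a Euclidean projection; the sets, function names, and parameters are illustrative choices, not taken from the paper.

import numpy as np

def project_ball(x, radius=1.0):
    # Resolvent of the normal cone of the ball {x : ||x|| <= radius}.
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def project_halfspace(x, a, b):
    # Resolvent of the normal cone of the half-space {x : <a, x> <= b}.
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

def douglas_rachford(z, res_A, res_B, iters=200):
    # z^{k+1} = z^k + J_A(2 J_B(z^k) - z^k) - J_B(z^k)
    for _ in range(iters):
        x = res_B(z)
        y = res_A(2 * x - z)
        z = z + y - x
    return res_B(z)  # candidate zero of A + B

a, b = np.array([1.0, 1.0]), 1.2
x_star = douglas_rachford(
    np.array([3.0, -2.0]),
    lambda v: project_ball(v, 1.0),
    lambda v: project_halfspace(v, a, b),
)
print(x_star, np.linalg.norm(x_star), a @ x_star)

The printed point lies in both sets, i.e. it is a zero of the sum of the two normal-cone operators.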
Adaptive Douglas-Rachford Splitting Algorithm for the Sum of Two Operators
TLDR
This paper proposes an adaptive Douglas-Rachford splitting algorithm for the sum of two operators, one of which is strongly monotone while the other is weakly monotone, and proves global linear convergence, which sharpens recently known results.
Relatively relaxed proximal point algorithms for generalized maximal monotone mappings and Douglas-Rachford splitting methods
  • R. Verma
  • Mathematics, Computer Science
  • 2011
TLDR
The approximation solvability of a general class of inclusion problems is discussed, generalizing most investigations of weak convergence for the proximal point algorithm in a real Hilbert space setting.
ON THE SPLITTING METHODS AND THE PROXIMAL POINT ALGORITHM FOR MAXIMAL MONOTONE OPERATORS
TLDR
It is proved that Peaceman-Rachford splitting is equivalent to applying a generalized proximal point algorithm, and a proof is given that Douglas-Rachford splitting is an application of the proximal point algorithm.
Douglas–Rachford Splitting for the Sum of a Lipschitz Continuous and a Strongly Monotone Operator
TLDR
The case when one operator is Lipschitz continuous but not necessarily a subdifferential operator and the other operator is strongly monotone arises in optimization methods based on primal–dual approaches; new linear convergence results are provided.
On the Douglas–Rachford algorithm
TLDR
This paper provides a complete proof of the full weak convergence of the Douglas–Rachford algorithm in the convex feasibility setting, relying on a new convergence principle for Fejér monotone sequences.
Forward-Douglas–Rachford splitting and forward-partial inverse method for solving monotone inclusions
We provide two weakly convergent algorithms for finding a zero of the sum of a maximally monotone operator, a cocoercive operator, and the normal cone to a closed vector subspace of a real Hilbert space.
Douglas-Rachford splitting for a Lipschitz continuous and a strongly monotone operator
The Douglas-Rachford method is a popular splitting technique for finding a zero of the sum of two subdifferential operators of proper closed convex functions; more generally, of two maximally monotone operators.
On Douglas–Rachford operators that fail to be proximal mappings
TLDR
This work considers the class of symmetric linear relations that are maximally monotone and proves the striking result that the Douglas–Rachford operator is generically not a proximal mapping.
The primal Douglas-Rachford splitting algorithm for a class of monotone mappings with application to the traffic equilibrium problem
  • M. Fukushima
  • Mathematics, Computer Science
    Math. Program.
  • 1996
TLDR
The Douglas-Rachford splitting algorithm is applied to a class of multi-valued equations involving the sum of two monotone mappings and is used to derive decomposition algorithms for solving the variational inequality formulation of the traffic equilibrium problem.

References

Showing 1-10 of 80 references
Applications of splitting algorithm to decomposition in convex programming and variational inequalities
Recently Han and Lou proposed a highly parallelizable decomposition algorithm for minimizing a strongly convex cost over the intersection of closed convex sets. It is shown that their algorithm is in… 
Applications of a Splitting Algorithm to Decomposition in Convex Programming
  • P. Tseng
  • Mathematics, Computer Science
  • 1988
TLDR
This paper shows that Han and Lou's algorithm, the method of multipliers, and the dual gradient method are all special cases of a certain multiplier method for separable convex programming, and gives an extension of Gabay's algorithm that allows dynamic stepsizes and has a linear rate of convergence.
The Lions-Mercier splitting algorithm and the alternating direction method are instances of the proximal point algorithm
Suppose we have two maximal monotone operators A and B over a Hilbert space and wish to find a zero of the operator A+B. We introduce an auxiliary maximal monotone operator $S_{\lambda,A,B}$ whose set of zeroes… 
Augmented Lagrangians and Applications of the Proximal Point Algorithm in Convex Programming
TLDR
The theory of the proximal point algorithm for maximal monotone operators is applied to three algorithms for solving convex programs, one of which has not previously been formulated and is shown to have much the same convergence properties, but with some potential advantages.
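As standard background for this reference (not quoted from it): for the equality-constrained problem $\min_x f(x)$ subject to $g(x) = 0$, the method of multipliers minimizes the augmented Lagrangian and then updates the multiplier,

$$x^{k+1} \in \arg\min_x \Big\{ f(x) + \langle \lambda^k, g(x) \rangle + \tfrac{c_k}{2} \| g(x) \|^2 \Big\}, \qquad \lambda^{k+1} = \lambda^k + c_k\, g(x^{k+1}),$$

and the multiplier update can be read as a proximal point step on the dual problem; a connection of this kind is what the reference develops for convex programs.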
Partial inverse of a monotone operator
For $T$ a maximal monotone operator on a Hilbert space $H$ and $A$ a closed subspace of $H$, the "partial inverse" $T_A$ of $T$ with respect to $A$ is introduced. $T_A$ is maximal monotone. The proximal point algorithm, as it… 
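For context, a standard statement of the definition (given here as background, not quoted from the reference): writing $P_A$ and $P_{A^\perp}$ for the orthogonal projections onto $A$ and $A^\perp$, the partial inverse $T_A$ is the operator whose graph is

$$\operatorname{gra} T_A = \{\, (P_A x + P_{A^\perp} y,\ P_A y + P_{A^\perp} x) : y \in T(x) \,\},$$

so that $T_H = T$ and $T_{\{0\}} = T^{-1}$.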
Monotone Operators and the Proximal Point Algorithm
For the problem of minimizing a lower semicontinuous proper convex function f on a Hilbert space, the proximal point algorithm in exact form generates a sequence $\{z^k\}$ by taking $z^{k+1}$ to be the minimizer of $f(z) + \tfrac{1}{2 c_k} \| z - z^k \|^2$, where $c_k > 0$… 
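In operator form (standard background, consistent with the minimization step quoted above): for a maximal monotone operator $T$, the proximal point algorithm iterates

$$z^{k+1} = (I + c_k T)^{-1}(z^k),$$

which, when $T = \partial f$, reduces to minimizing $f(z) + \tfrac{1}{2 c_k} \| z - z^k \|^2$ as above.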
Applications of the method of partial inverses to convex programming: Decomposition
TLDR
A primal–dual decomposition method, equivalent to the proximal point algorithm applied to a certain maximal monotone multifunction, is presented for solving the separable convex programming problem.
Modified Lagrangians in Convex Programming and their Generalizations
In this paper a rather general class of modified Lagrangians is described for which the main results of the duality theory hold. Within this class, two families of modified Lagrangians are taken into consideration… 
Non-Linear Monotone Operators in Banach Spaces
The article is a survey of work on non-linear monotone operators on Banach spaces. Let … be an operator acting from a Banach space into its adjoint space. If on the whole space the scalar product… 