# Monotone Operators and the Proximal Point Algorithm

@article{Rockafellar1976MonotoneOA,
title={Monotone Operators and the Proximal Point Algorithm},
author={R. Tyrrell Rockafellar},
journal={SIAM Journal on Control and Optimization},
year={1976},
volume={14},
pages={877-898}
}
• R. Rockafellar
• Published 1 August 1976
• Mathematics
• SIAM Journal on Control and Optimization
For the problem of minimizing a lower semicontinuous proper convex function f on a Hilbert space, the proximal point algorithm in exact form generates a sequence $\{ z^k \}$ by taking $z^{k+1}$ to be the minimizer of $f(z) + (1/(2c_k))\| z - z^k \|^2$, where $c_k > 0$. This algorithm is of interest for several reasons, but especially because of its role in certain computational methods based on duality, such as the Hestenes-Powell method of multipliers in nonlinear programming. It…
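The exact iteration described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's general Hilbert-space setting: it takes the simple one-dimensional choice f(z) = |z|, whose proximal subproblem has the closed-form soft-threshold solution, so each step can be carried out exactly.

```python
# Proximal point iteration z^{k+1} = argmin_z f(z) + (1/(2*c_k)) * |z - z^k|^2,
# illustrated for f(z) = |z| on the real line. The proximal map of |.| with
# step c > 0 has the closed form (soft threshold): sign(z) * max(|z| - c, 0).
import math

def prox_abs(z, c):
    """Exact solution of the proximal subproblem for f(z) = |z|."""
    return math.copysign(max(abs(z) - c, 0.0), z)

def proximal_point(z0, steps, c=0.5):
    """Run the exact proximal point algorithm with constant c_k = c."""
    z = z0
    history = [z]
    for _ in range(steps):
        z = prox_abs(z, c)  # each step exactly minimizes the regularized subproblem
        history.append(z)
    return history

traj = proximal_point(z0=3.0, steps=10)
print(traj[-1])  # reaches the minimizer z* = 0 of |z|
```

Each iterate moves a fixed distance c toward the minimizer and then stays there, which is the behavior the exact form of the algorithm guarantees for this f.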
## Citations (3,719)
• O. Güler
• Mathematics, Computer Science
SIAM J. Optim.
• 1992
Two new proximal point algorithms for minimizing a proper, lower-semicontinuous convex function f, which converge even if f has no minimizers or is unbounded from below, are introduced.
The asymptotic convergence of the proximal point algorithm (PPA), for the solution of equations of type $0 \in Tz$, where T is a multivalued maximal monotone operator in a real Hilbert space, is
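The inclusion form $0 \in Tz$ mentioned above is solved by iterating the resolvent, $z^{k+1} = (I + c_k T)^{-1}(z^k)$. As a hedged sketch only: the operator $T(z) = a(z - b)$ with $a > 0$ is a hypothetical single-valued monotone example chosen because its resolvent can be written in closed form, which keeps the iteration transparent.

```python
# Proximal point algorithm for the inclusion 0 ∈ T(z):
#   z^{k+1} = (I + c_k T)^{-1}(z^k)   (resolvent iteration).
# Example operator T(z) = a*(z - b), a > 0, whose unique zero is z* = b.
# The resolvent solves w + c*a*(w - b) = z, giving w = (z + c*a*b) / (1 + c*a).
def resolvent(z, c, a=2.0, b=1.0):
    """Closed-form resolvent (I + c*T)^{-1} for T(z) = a*(z - b)."""
    return (z + c * a * b) / (1.0 + c * a)

def ppa_inclusion(z0, steps, c=1.0):
    """Iterate the resolvent with constant parameter c_k = c."""
    z = z0
    for _ in range(steps):
        z = resolvent(z, c)
    return z

print(ppa_inclusion(z0=10.0, steps=50))  # approaches 1.0, the zero of T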
• Mathematics
• 1991
A perturbed parallel decomposition method for solving the following model problem is presented: minimize $f_0 (x) + \sum_{i = 1}^m {f_i (x)}$ over all x in $\mathbb{R}^n$, where $f_0$ is
• Mathematics
• 2015
In a Hilbert setting, we introduce a new dynamical system and associated algorithms for solving monotone inclusions by rapid methods. Given a maximal monotone operator $A$, the evolution is governed
In this article, we give three iterative methods for approximation of fixed points of nonexpansive mappings in a Hilbert space. Then we discuss weak and strong convergence theorems for nonlinear
• Mathematics
• 1998
In this paper, we analyze a class of methods for minimizing a proper lower semicontinuous extended-valued convex function $f : \Re^n \to \Re \cup \{\infty\}$. Instead of the
• Mathematics
Math. Program.
• 2002
It is proved that under an appropriate choice of the sequences $\lambda_k$, $\varepsilon_k$ and with some control on the residual $\nu_k$, for every $r_k \to 0^+$ the sequence $u^k$ converges towards an optimal point $u^\infty$ of the linear program.
• Mathematics
• 2003
The problem concerned in this paper is the set-valued equation $0 \in T(z)$ where T is a maximal monotone operator. For given $x^k$ and $\beta_k > 0$, some existing approximate proximal point algorithms
The weighted averaged sequence of iterates is shown to converge weakly to a zero of the Aumann expectation ${\mathbb E}(A(\xi_1,\,.\,))$ under the assumption that the latter is maximal.
• Mathematics, Computer Science
J. Glob. Optim.
• 2007
Under the assumption that the set of global minimizers is nonempty and bounded, it is proved that the sequence generated by the algorithms converges to a solution of the problem if the proximal parameters approach zero.

## References

Showing 1–10 of 31 references

is called the effective domain of T, and T is said to be locally bounded at a point $x \in D(T)$ if there exists a neighborhood U of x such that the set (1.4) $T(U) = \bigcup \{\, T(u) \mid u \in U \,\}$ is a bounded subset of
A finite-valued convex function on a nonempty convex set C in F can always be extended to a proper convex function on F by assigning it the value $+\infty$ outside of C. Let F and G be real vector spaces
The theory of the proximal point algorithm for maximal monotone operators is applied to three algorithms for solving convex programs, one of which has not previously been formulated and is shown to have much the same convergence properties, but with some potential advantages.
For nonlinear programming problems with equality constraints, Hestenes and Powell have independently proposed a dual method of solution in which squares of the constraint functions are added as
The main purpose of this paper is to suggest a method for finding the minimum of a function f(x) subject to the constraint g(x) = 0, which consists of replacing f by $F = f + \lambda g + \tfrac{1}{2} c g^2$, and computing the appropriate value of the Lagrange multiplier.
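The multiplier method described in this reference can be sketched on a toy problem. This is an illustrative assumption, not the reference's setting: the problem minimize $f(x) = x^2$ subject to $g(x) = x - 1 = 0$ is chosen so that each minimization of $F = f + \lambda g + \tfrac{1}{2} c g^2$ has a closed-form solution.

```python
# Method of multipliers sketch for: minimize f(x) = x^2  s.t.  g(x) = x - 1 = 0.
# Each outer step exactly minimizes the augmented function
#   F(x) = f(x) + lam*g(x) + (c/2)*g(x)**2,
# by solving F'(x) = 2x + lam + c*(x - 1) = 0, then updates the multiplier:
#   lam <- lam + c*g(x).
def method_of_multipliers(steps, c=10.0, lam=0.0):
    x = 0.0
    for _ in range(steps):
        x = (c - lam) / (2.0 + c)   # exact minimizer of F for current lam
        lam = lam + c * (x - 1.0)   # dual (multiplier) update
    return x, lam

x, lam = method_of_multipliers(steps=30)
print(round(x, 6), round(lam, 6))  # approaches x* = 1 and lambda* = -2
```

The multiplier error contracts by the factor $2/(2 + c)$ per outer step in this example, so larger penalty parameters c accelerate convergence, which is the usual trade-off in such dual methods.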
This paper identifies necessary and sufficient conditions for a penalty method to yield an optimal solution or a Lagrange multiplier of a convex programming problem by means of a single unconstrained
• Mathematics
• 1975
An example is given of a contraction T defined on a bounded closed convex subset of Hilbert space for which $((I+T)/2)^n$ does not converge.
© Bulletin de la S. M. F., 1965, all rights reserved. Access to the archives of the journal "Bulletin de la S. M. F." (http://smf.emath.fr/Publications/Bulletin/Presentation.html) implies agreement