Monotone Operators and the Proximal Point Algorithm

  • R. Rockafellar
  • Published 1 August 1976
  • Mathematics
  • SIAM Journal on Control and Optimization
For the problem of minimizing a lower semicontinuous proper convex function f on a Hilbert space, the proximal point algorithm in exact form generates a sequence $\{ z^k \}$ by taking $z^{k+1}$ to be the minimizer of $f(z) + (1/(2c_k))\| z - z^k \|^2$, where $c_k > 0$. This algorithm is of interest for several reasons, but especially because of its role in certain computational methods based on duality, such as the Hestenes-Powell method of multipliers in nonlinear programming. It… 
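The exact update above can be sketched numerically. This is a minimal illustration only, assuming the simple choice $f(z) = \tfrac{1}{2} z^\top Q z$ with $Q$ positive definite (not an example from the paper), for which the proximal subproblem has the closed form $(c_k Q + I)\,z^{k+1} = z^k$:

```python
import numpy as np

# Illustrative sketch of the exact proximal point iteration
#   z_{k+1} = argmin_z f(z) + (1/(2*c)) * ||z - z_k||^2
# for the assumed quadratic f(z) = 0.5 * z^T Q z, where the
# subproblem reduces to solving (c*Q + I) z_{k+1} = z_k.

def proximal_point(Q, z0, c=1.0, iters=50):
    z = z0.astype(float)
    I = np.eye(Q.shape[0])
    for _ in range(iters):
        # one exact proximal step: minimize f + (1/(2c))||. - z||^2
        z = np.linalg.solve(c * Q + I, z)
    return z

Q = np.array([[2.0, 0.0], [0.0, 1.0]])
z_star = proximal_point(Q, np.array([3.0, -4.0]))
# the iterates contract toward the unique minimizer z = 0
```

With a fixed parameter $c$, each step shrinks every eigencomponent by the factor $1/(1 + c\lambda_i)$, which is the contraction behavior the convergence theory formalizes for general $c_k > 0$.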

New Proximal Point Algorithms for Convex Minimization

  • O. Güler
  • Mathematics, Computer Science
    SIAM J. Optim.
  • 1992
Two new proximal point algorithms for minimizing a proper, lower-semicontinuous convex function f, which converge even if f has no minimizers or is unbounded from below, are introduced.

Asymptotic convergence analysis of the proximal point algorithm

The asymptotic convergence of the proximal point algorithm (PPA), for the solution of equations of type $0 \in Tz$, where T is a multivalued maximal monotone operator in a real Hilbert space, is

A perturbed parallel decomposition method for a class of nonsmooth convex minimization problems

A perturbed parallel decomposition method for solving the following model problem is presented: minimize $f_0 (x) + \sum_{i = 1}^m {f_i (x)} $ over all x in $\mathbb{R}^n $, where $f_0 $ is

A dynamic approach to a proximal-Newton method for monotone inclusions in Hilbert spaces, with complexity O(1/n^2)

In a Hilbert setting, we introduce a new dynamical system and associated algorithms for solving monotone inclusions by rapid methods. Given a maximal monotone operator $A$, the evolution is governed


In this article, we give three iterative methods for approximation of fixed points of nonexpansive mappings in a Hilbert space. Then we discuss weak and strong convergence theorems for nonlinear

Convergence Analysis of Some Methods for Minimizing a Nonsmooth Convex Function

In this paper, we analyze a class of methods for minimizing a proper lower semicontinuous extended-valued convex function $f:\mathbb{R}^n \to \mathbb{R} \cup \{\infty\}$. Instead of the

Primal and dual convergence of a proximal point exponential penalty method for linear programming

It is proved that under an appropriate choice of the sequences $\lambda_k$, $\varepsilon_k$ and with some control on the residual $\nu_k$, for every $r_k \to 0^+$ the sequence $u_k$ converges towards an optimal point $u_\infty$ of the linear program.

A new approximate proximal point algorithm for maximal monotone operator

The problem concerned in this paper is the set-valued equation $0 \in T(z)$ where T is a maximal monotone operator. For given $x^k$ and $\beta_k > 0$, some existing approximate proximal point algorithms

Ergodic Convergence of a Stochastic Proximal Point Algorithm

The weighted averaged sequence of iterates is shown to converge weakly to a zero of the Aumann expectation ${\mathbb E}(A(\xi_1,\cdot))$ under the assumption that the latter is maximal.

Entropy-like proximal algorithms based on a second-order homogeneous distance function for quasi-convex programming

Under the assumption that the global minimizer set is nonempty and bounded, it is proved that the sequence generated by the algorithms converges to a solution of the problem if the proximal parameters approach zero.



On the maximality of sums of nonlinear monotone operators

is called the effective domain of T, and T is said to be locally bounded at a point $x \in D(T)$ if there exists a neighborhood U of x such that the set (1.4) $T(U) = \bigcup \{ T(u) \mid u \in U \}$ is a bounded subset of


A finite-valued convex function on a nonempty convex set C in F can always be extended to a proper convex function on F by assigning it the value $+\infty$ outside of C. Let F and G be real vector spaces

Augmented Lagrangians and Applications of the Proximal Point Algorithm in Convex Programming

The theory of the proximal point algorithm for maximal monotone operators is applied to three algorithms for solving convex programs, one of which has not previously been formulated and is shown to have much the same convergence properties, but with some potential advantages.

The multiplier method of Hestenes and Powell applied to convex programming

For nonlinear programming problems with equality constraints, Hestenes and Powell have independently proposed a dual method of solution in which squares of the constraint functions are added as

Multiplier and gradient methods

The main purpose of this paper is to suggest a method for finding the minimum of a function f(x) subject to the constraint g(x) = 0, which consists of replacing f by $F = f + \lambda g + \tfrac{1}{2} c g^2$, and computing the appropriate value of the Lagrange multiplier.
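The multiplier update this describes can be sketched on a toy problem. The one-dimensional f, g, and all constants below are illustrative assumptions chosen so the inner minimization has a closed form; they are not taken from the paper:

```python
# Sketch of the multiplier method: minimize f(x) = (x - 2)^2 subject to
# g(x) = x - 1 = 0 (an assumed toy problem). Each outer step minimizes
# the augmented function F = f + lam*g + (c/2)*g^2 exactly in x, then
# updates the multiplier estimate via lam <- lam + c*g(x).

def multiplier_method(c=10.0, lam=0.0, iters=30):
    for _ in range(iters):
        # argmin_x (x-2)^2 + lam*(x-1) + (c/2)*(x-1)^2, obtained by
        # setting the derivative 2(x-2) + lam + c(x-1) to zero:
        x = (4.0 - lam + c) / (2.0 + c)
        lam += c * (x - 1.0)  # dual (multiplier) update
    return x, lam

x, lam = multiplier_method()
# x approaches the constrained minimizer 1, lam the multiplier 2
```

On this example the multiplier error contracts by the factor $2/(2+c)$ per outer step, which is why larger penalty parameters $c$ speed up the dual iteration.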

Necessary and sufficient conditions for a penalty method to be exact

This paper identifies necessary and sufficient conditions for a penalty method to yield an optimal solution or a Lagrange multiplier of a convex programming problem by means of a single unconstrained

An example concerning fixed points

An example is given of a contraction T defined on a bounded closed convex subset of Hilbert space for which $((I+T)/2)^n$ does not converge.

Proximité et dualité dans un espace hilbertien

© Bulletin de la S. M. F., 1965, all rights reserved. Access to the archives of the journal « Bulletin de la S. M. F. » ( http://smf., implies agreement with