The Proximal Augmented Lagrangian Method for Nonsmooth Composite Optimization

@article{Dhingra2019ThePA,
  title={The Proximal Augmented Lagrangian Method for Nonsmooth Composite Optimization},
  author={Neil K. Dhingra and Sei Zhen Khong and Mihailo R. Jovanovi{\'c}},
  journal={IEEE Transactions on Automatic Control},
  year={2019},
  volume={64},
  pages={2861-2868}
}
We study a class of optimization problems in which the objective function is given by the sum of a differentiable but possibly nonconvex component and a nondifferentiable convex regularization term. We introduce an auxiliary variable to separate the objective function components and utilize the Moreau envelope of the regularization term to derive the proximal augmented Lagrangian—a continuously differentiable function obtained by constraining the augmented Lagrangian to the manifold that… 
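For concreteness, the construction described in the abstract can be written out in standard notation (the symbols below are our choice and may differ from the paper's exact statement). Splitting the problem as minimize f(x) + g(z) subject to Tx = z and minimizing the augmented Lagrangian explicitly over z yields the proximal augmented Lagrangian, expressed through the Moreau envelope M_{\mu g} and the proximal operator prox_{\mu g}:

M_{\mu g}(v) = \min_{z}\; g(z) + \tfrac{1}{2\mu}\|z - v\|^{2},
\qquad
\operatorname{prox}_{\mu g}(v) = \operatorname*{argmin}_{z}\; g(z) + \tfrac{1}{2\mu}\|z - v\|^{2},

\mathcal{L}_{\mu}(x;\, y) = f(x) + M_{\mu g}(Tx + \mu y) - \tfrac{\mu}{2}\|y\|^{2}.

Because the Moreau envelope is continuously differentiable with \nabla M_{\mu g}(v) = (v - \operatorname{prox}_{\mu g}(v))/\mu, the function \mathcal{L}_{\mu} inherits continuous differentiability from f, which is what makes gradient-based primal-dual methods applicable despite the nonsmooth regularizer.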


A second order primal-dual method for nonsmooth convex composite optimization
TLDR
A globally convergent customized algorithm is developed that utilizes the primal-dual augmented Lagrangian as a merit function; it is shown that the search direction can be computed efficiently, and quadratic/superlinear asymptotic convergence is proved.
A second order primal-dual algorithm for nonsmooth convex composite optimization
We develop a second order primal-dual algorithm for nonsmooth convex composite optimization problems in which the objective function is given by the sum of a twice differentiable term and a possibly…
Distributed proximal augmented Lagrangian method for nonsmooth composite optimization
TLDR
The Moreau envelope associated with the nonsmooth part of the objective function is used to bring the optimization problem into a continuously differentiable form that serves as a basis for the development of a primal-descent dual-ascent gradient flow method.
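To make this mechanism concrete, below is a minimal numpy sketch (not the authors' implementation; the choices f(x) = 0.5*||Ax - b||^2, g = gamma*||.||_1, T = I, and all parameter values are illustrative) of a forward-Euler discretization of primal-descent dual-ascent gradient flow on the proximal augmented Lagrangian:

import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t*||.||_1 (soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def primal_dual_flow(A, b, gamma, mu=1.0, step=1e-3, iters=20000):
    # Forward-Euler discretization of the primal-descent dual-ascent
    # gradient flow on L_mu(x; y) for 0.5*||Ax - b||^2 + gamma*||x||_1
    # with the splitting x = z (so T = I).
    n = A.shape[1]
    x = np.zeros(n)
    y = np.zeros(n)  # Lagrange multiplier for the constraint x - z = 0
    for _ in range(iters):
        v = x + mu * y                                # Moreau-envelope argument
        r = (v - soft_threshold(v, mu * gamma)) / mu  # gradient of the Moreau envelope at v
        grad_x = A.T @ (A @ x - b) + r                # primal gradient of L_mu
        grad_y = mu * (r - y)                         # dual gradient of L_mu
        x = x - step * grad_x                         # primal descent
        y = y + step * grad_y                         # dual ascent
    return x

# Tiny usage example on a random sparse-recovery instance.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[:5] = 1.0
x_hat = primal_dual_flow(A, A @ x_true, gamma=0.1)

For a sufficiently small step size, the iterates settle at a point satisfying x = soft_threshold(x + mu*y, mu*gamma), i.e., a stationary point of the original composite problem.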
An Exponentially Convergent Primal-Dual Algorithm for Nonsmooth Composite Minimization
TLDR
This paper proves explicit bounds on the exponential convergence rates of the proposed algorithm with a sufficiently small step size and develops a linear matrix inequality (LMI) condition which can be numerically solved to provide rate certificates with general step size choices.
On the Exponential Convergence Rate of Proximal Gradient Flow Algorithms
TLDR
It is proved that global exponential convergence can be achieved even in the absence of strong convexity, and a distributed implementation of the gradient flow dynamics based on the proximal augmented Lagrangian, which is globally exponentially stable for strongly convex problems, is provided.
Global exponential stability of primal-dual gradient flow dynamics based on the proximal augmented Lagrangian: A Lyapunov-based approach
TLDR
The quadratic Lyapunov function generalizes recent results from strongly convex problems with either affine equality or inequality constraints to a broader class of composite optimization problems with nonsmooth regularizers, and it provides a worst-case lower bound on the exponential decay rate.
A Smooth Double Proximal Primal-Dual Algorithm for a Class of Distributed Nonsmooth Optimization Problems
TLDR
This technical note studies a class of distributed nonsmooth convex consensus optimization problems and proposes a distributed double proximal primal-dual optimization algorithm; it is shown that the algorithm makes the states achieve consensus at the optimal point.
An inexact augmented Lagrangian method for nonsmooth optimization on Riemannian manifold
We consider a nonsmooth optimization problem on a Riemannian manifold, whose objective function is the sum of a differentiable component and a nonsmooth convex function. We propose a manifold inexact…
On a primal-dual Newton proximal method for convex quadratic programs
Alberto De Marchi, Computational Optimization and Applications, 2022
TLDR
QPDO, a primal-dual method for convex quadratic programs that builds upon and weaves together the proximal point algorithm and a damped semismooth Newton method, is introduced and proves to be a simple, robust, and efficient numerical method.

References

Showing 1-10 of 55 references
An Exponentially Convergent Primal-Dual Algorithm for Nonsmooth Composite Minimization
TLDR
This paper proves explicit bounds on the exponential convergence rates of the proposed algorithm with a sufficiently small step size and develops a linear matrix inequality (LMI) condition which can be numerically solved to provide rate certificates with general step size choices.
Proximal alternating linearized minimization for nonconvex and nonsmooth problems
TLDR
A self-contained convergence analysis framework is derived and it is established that each bounded sequence generated by PALM globally converges to a critical point.
Accelerated Proximal Gradient Methods for Nonconvex Programming
TLDR
This paper is the first to provide APG-type algorithms for general nonconvex and nonsmooth problems ensuring that every accumulation point is a critical point, and the convergence rates remain O(1/k^2) when the problems are convex.
Convergence analysis of alternating direction method of multipliers for a family of nonconvex problems
TLDR
It is shown that, in the presence of a nonconvex objective function, classical ADMM is able to reach the set of stationary solutions for these problems if the stepsize is chosen large enough.
Augmented Lagrangians and Applications of the Proximal Point Algorithm in Convex Programming
TLDR
The theory of the proximal point algorithm for maximal monotone operators is applied to three algorithms for solving convex programs, one of which has not previously been formulated and is shown to have much the same convergence properties, but with some potential advantages.
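In one line (standard form, our notation): the link this reference establishes is that the method of multipliers for minimize f(x) subject to Ax = b is the proximal point algorithm applied to the dual problem,

x_{k+1} \in \operatorname*{argmin}_{x}\; f(x) + y_k^{\top}(Ax - b) + \tfrac{\mu}{2}\|Ax - b\|^{2},
\qquad
y_{k+1} = y_k + \mu\,(A x_{k+1} - b),

where the multiplier update is precisely a proximal point step on the (concave) dual function.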
A method of multipliers algorithm for sparsity-promoting optimal control
We develop a customized method of multipliers algorithm to efficiently solve a class of regularized optimal control problems. By exploiting the problem structure, we transform the augmented…
The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent
TLDR
A sufficient condition is presented to ensure the convergence of the direct extension of ADMM, and an example is given to show that the direct extension is not necessarily convergent.
A globally convergent augmented Lagrangian algorithm for optimization with general constraints and simple bounds
The global and local convergence properties of a class of augmented Lagrangian methods for solving nonlinear programming problems are considered. In such methods, simple bound constraints are treated…
Analysis and Design of Optimization Algorithms via Integral Quadratic Constraints
TLDR
A new framework to analyze and design iterative optimization algorithms built on the notion of Integral Quadratic Constraints (IQC) from robust control theory is developed, proving new inequalities about convex functions and providing a version of IQC theory adapted for use by optimization researchers.