The Proximal Augmented Lagrangian Method for Nonsmooth Composite Optimization
@article{Dhingra2019ThePA, title={The Proximal Augmented Lagrangian Method for Nonsmooth Composite Optimization}, author={Neil K. Dhingra and Sei Zhen Khong and Mihailo R. Jovanovi{\'c}}, journal={IEEE Transactions on Automatic Control}, year={2019}, volume={64}, pages={2861-2868} }
We study a class of optimization problems in which the objective function is given by the sum of a differentiable but possibly nonconvex component and a nondifferentiable convex regularization term. We introduce an auxiliary variable to separate the objective function components and utilize the Moreau envelope of the regularization term to derive the proximal augmented Lagrangian—a continuously differentiable function obtained by constraining the augmented Lagrangian to the manifold that…
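The abstract is cut off above. As a reading aid, here is a sketch of the construction it describes, written in standard notation (f the differentiable component, g the regularizer, T the linear map in the constraint, μ > 0 the penalty parameter); the paper's exact symbols may differ.

```latex
% Auxiliary-variable splitting:  minimize_{x,z} f(x) + g(z)  subject to  Tx - z = 0.
\begin{align}
  M_{\mu g}(v) &= \min_z \; g(z) + \tfrac{1}{2\mu}\|z - v\|^2
    && \text{(Moreau envelope of } g\text{)} \\
  \mathcal{L}_\mu(x; y) &= f(x) + M_{\mu g}(Tx + \mu y) - \tfrac{\mu}{2}\|y\|^2
    && \text{(proximal augmented Lagrangian)}
\end{align}
% Minimizing the augmented Lagrangian over z at fixed (x, y) gives
% z^\star = \operatorname{prox}_{\mu g}(Tx + \mu y); substituting it back
% yields the continuously differentiable function above, since
% \nabla M_{\mu g}(v) = (v - \operatorname{prox}_{\mu g}(v))/\mu.
```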
74 Citations
A second order primal-dual method for nonsmooth convex composite optimization
- Mathematics, Computer Science · IEEE Transactions on Automatic Control
- 2021
A globally convergent customized algorithm is developed that uses the primal-dual augmented Lagrangian as a merit function; it is shown that the search direction can be computed efficiently, and quadratic/superlinear asymptotic convergence is proved.
A second order primal-dual algorithm for nonsmooth convex composite optimization
- Mathematics, Computer Science · 2017 IEEE 56th Annual Conference on Decision and Control (CDC)
- 2017
We develop a second order primal-dual algorithm for nonsmooth convex composite optimization problems in which the objective function is given by the sum of a twice differentiable term and a possibly…
Distributed proximal augmented Lagrangian method for nonsmooth composite optimization
- Mathematics, Computer Science · 2018 Annual American Control Conference (ACC)
- 2018
The Moreau envelope associated with the nonsmooth part of the objective function is used to bring the optimization problem into a continuously differentiable form that serves as a basis for the development of a primal-descent dual-ascent gradient flow method.
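As an illustration of this primal-descent dual-ascent structure, here is a minimal forward-Euler sketch on a lasso-type instance; the splitting (T = I), step size, and all names are assumptions made for the example, not the cited paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
gamma, mu, step = 0.1, 1.0, 1e-3        # regularization, penalty, Euler step

def soft(v, t):
    """Soft thresholding: prox of t*||.||_1 at v."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# Gradient flow on the proximal augmented Lagrangian for
#   minimize 0.5*||Ax - b||^2 + gamma*||z||_1  s.t.  x - z = 0.
x, y = np.zeros(10), np.zeros(10)
for _ in range(20000):
    v = x + mu * y
    r = (v - soft(v, mu * gamma)) / mu       # gradient of the Moreau envelope
    x = x - step * (A.T @ (A @ x - b) + r)   # primal descent on L_mu
    y = y + step * (mu * r - mu * y)         # dual ascent on L_mu
print("residual:", np.linalg.norm(x - soft(x + mu * y, mu * gamma)))
```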
An Exponentially Convergent Primal-Dual Algorithm for Nonsmooth Composite Minimization
- Mathematics, Computer Science · 2018 IEEE Conference on Decision and Control (CDC)
- 2018
This paper proves explicit bounds on the exponential convergence rates of the proposed algorithm with a sufficiently small step size and develops a linear matrix inequality (LMI) condition which can be numerically solved to provide rate certificates with general step size choices.
On the Exponential Convergence Rate of Proximal Gradient Flow Algorithms
- Computer Science, Mathematics · 2018 IEEE Conference on Decision and Control (CDC)
- 2018
It is proved that global exponential convergence can be achieved even in the absence of strong convexity, and a distributed implementation of the gradient flow dynamics based on the proximal augmented Lagrangian is provided; for strongly convex problems, global exponential stability is established.
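For context, the proximal gradient flow referenced in this entry is commonly written as follows (a standard form; the paper's exact parameterization may differ):

```latex
% Proximal gradient flow for  minimize_x f(x) + g(x):
\begin{equation}
  \dot{x} = -x + \operatorname{prox}_{\mu g}\bigl(x - \mu \nabla f(x)\bigr),
\end{equation}
% whose equilibria are the fixed points of the proximal gradient map,
% i.e., the optimal solutions when f and g are convex.
```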
Global exponential stability of primal-dual gradient flow dynamics based on the proximal augmented Lagrangian: A Lyapunov-based approach
- Mathematics · 2020 59th IEEE Conference on Decision and Control (CDC)
- 2020
The quadratic Lyapunov function generalizes recent results from strongly convex problems with either affine equality or inequality constraints to a broader class of composite optimization problems with nonsmooth regularizers, and it provides a worst-case lower bound on the exponential decay rate.
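Schematically, a certificate of the kind this entry describes takes the standard quadratic form below; the symbols ξ, P, ρ, κ are generic placeholders, not the paper's notation.

```latex
% With \xi = (x - x^\star, \, y - y^\star) the deviation from the saddle
% point of the proximal augmented Lagrangian:
\begin{equation}
  V(\xi) = \xi^\top P \xi, \quad P \succ 0, \qquad
  \dot{V} \le -2\rho V
  \;\Longrightarrow\;
  \|\xi(t)\| \le \sqrt{\kappa(P)}\, e^{-\rho t} \|\xi(0)\|,
\end{equation}
% where \kappa(P) is the condition number of P; feasibility of such a P
% certifies global exponential stability with decay rate \rho.
```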
Proximal gradient flow and Douglas-Rachford splitting dynamics: global exponential stability via integral quadratic constraints
- Mathematics, Computer Science · Automatica
- 2021
A Smooth Double Proximal Primal-Dual Algorithm for a Class of Distributed Nonsmooth Optimization Problems
- Mathematics · IEEE Transactions on Automatic Control
- 2020
This technical note studies a class of distributed nonsmooth convex consensus optimization problems and proposes a distributed double proximal primal-dual optimization algorithm; it is shown that the proposed algorithm drives the states to consensus at the optimal point.
An inexact augmented Lagrangian method for nonsmooth optimization on Riemannian manifold
- Mathematics
- 2019
We consider a nonsmooth optimization problem on Riemannian manifold, whose objective function is the sum of a differentiable component and a nonsmooth convex function. We propose a manifold inexact…
On a primal-dual Newton proximal method for convex quadratic programs
- Computer Science, Mathematics · Computational Optimization and Applications
- 2022
QPDO, a primal-dual method for convex quadratic programs, is introduced; it builds upon and weaves together the proximal point algorithm and a damped semismooth Newton method, and it proves to be a simple, robust, and efficient numerical method.
References
SHOWING 1-10 OF 55 REFERENCES
An Exponentially Convergent Primal-Dual Algorithm for Nonsmooth Composite Minimization
- Mathematics, Computer Science · 2018 IEEE Conference on Decision and Control (CDC)
- 2018
This paper proves explicit bounds on the exponential convergence rates of the proposed algorithm with a sufficiently small step size and develops a linear matrix inequality (LMI) condition which can be numerically solved to provide rate certificates with general step size choices.
Proximal alternating linearized minimization for nonconvex and nonsmooth problems
- Mathematics, Computer Science · Mathematical Programming
- 2014
A self-contained convergence analysis framework is derived and it is established that each bounded sequence generated by PALM globally converges to a critical point.
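A minimal sketch of the PALM recursion on a rank-1 nonnegative factorization instance, one of the standard model problems for this method; the instance, step-size rule, and names are illustrative assumptions.

```python
import numpy as np

# PALM for  minimize 0.5*||M - x z^T||_F^2  s.t.  x >= 0, z >= 0,
# with alternating proximal linearized steps; the prox of the
# nonnegativity indicator is projection onto the nonnegative orthant.
rng = np.random.default_rng(1)
M = np.abs(rng.standard_normal((20, 15)))

def palm(M, iters=500):
    m, n = M.shape
    x, z = np.ones(m), np.ones(n)
    for _ in range(iters):
        cx = max(z @ z, 1e-8)             # Lipschitz constant of grad_x H
        x = np.maximum(x - ((np.outer(x, z) - M) @ z) / cx, 0.0)
        cz = max(x @ x, 1e-8)             # Lipschitz constant of grad_z H
        z = np.maximum(z - ((np.outer(x, z) - M).T @ x) / cz, 0.0)
    return x, z

x, z = palm(M)
print("residual:", np.linalg.norm(M - np.outer(x, z)))
```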
Accelerated Proximal Gradient Methods for Nonconvex Programming
- Computer Science, Mathematics · NIPS
- 2015
This paper is the first to provide APG-type algorithms for general nonconvex and nonsmooth problems ensuring that every accumulation point is a critical point, and the convergence rates remain O(1/k²) when the problems are convex.
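For reference, the standard (convex) accelerated proximal gradient recursion is sketched below on a lasso instance to show the iteration shape; the cited paper's nonconvex variants add monitoring steps to guarantee critical-point convergence, which this minimal sketch omits.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 20))
b = rng.standard_normal(40)
gamma = 0.1
L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of grad f

def soft(v, t):
    """Soft thresholding: prox of t*||.||_1 at v."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x_prev = np.zeros(20)
y = x_prev.copy()
t = 1.0
for _ in range(300):
    x = soft(y - A.T @ (A @ y - b) / L, gamma / L)     # proximal gradient step
    t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
    y = x + ((t - 1.0) / t_next) * (x - x_prev)        # Nesterov extrapolation
    x_prev, t = x, t_next
print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2 + gamma * np.abs(x).sum())
```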
Convergence analysis of alternating direction method of multipliers for a family of nonconvex problems
- Computer Science · 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
- 2015
It is shown that in the presence of nonconvex objective function, classical ADMM is able to reach the set of stationary solutions for these problems, if the stepsize is chosen large enough.
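The classical ADMM iteration referenced here has the standard scaled form below, shown on a convex lasso instance for concreteness (the cited paper treats nonconvex objectives); the instance and names are illustrative assumptions.

```python
import numpy as np

# Scaled-form ADMM for  minimize 0.5*||Ax - b||^2 + gamma*||z||_1
# with the consensus constraint  x - z = 0.
rng = np.random.default_rng(3)
A = rng.standard_normal((30, 10))
b = rng.standard_normal(30)
gamma, rho = 0.1, 1.0                   # regularization and penalty weights

def soft(v, t):
    """Soft thresholding: prox of t*||.||_1 at v."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

Q = np.linalg.inv(A.T @ A + rho * np.eye(10))   # factor the x-update once

x, z, u = np.zeros(10), np.zeros(10), np.zeros(10)
for _ in range(200):
    x = Q @ (A.T @ b + rho * (z - u))   # x-update: minimize aug. Lagrangian in x
    z = soft(x + u, gamma / rho)        # z-update: prox of (gamma/rho)*||.||_1
    u = u + x - z                       # scaled dual update
print("primal residual:", np.linalg.norm(x - z))
```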
Augmented Lagrangians and Applications of the Proximal Point Algorithm in Convex Programming
- Computer Science, Mathematics · Mathematics of Operations Research
- 1976
The theory of the proximal point algorithm for maximal monotone operators is applied to three algorithms for solving convex programs, one of which has not previously been formulated and is shown to have much the same convergence properties, but with some potential advantages.
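The classical identification this paper establishes can be summarized as follows (standard notation, equality-constrained case for concreteness):

```latex
% Method of multipliers for  minimize_x f(x)  subject to  Ax = b:
\begin{align}
  x^{k+1} &\in \arg\min_x \; f(x) + \langle y^k, Ax - b \rangle
             + \tfrac{\rho}{2} \|Ax - b\|^2, \\
  y^{k+1} &= y^k + \rho \,(Ax^{k+1} - b).
\end{align}
% The dual update is exactly one step of the proximal point algorithm,
% with parameter \rho, applied to the maximal monotone operator given by
% the subdifferential of the negative dual function.
```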
Asymptotic convergence of constrained primal-dual dynamics
- Mathematics · Systems & Control Letters
- 2016
A method of multipliers algorithm for sparsity-promoting optimal control
- Computer Science · 2016 American Control Conference (ACC)
- 2016
We develop a customized method of multipliers algorithm to efficiently solve a class of regularized optimal control problems. By exploiting the problem structure, we transform the augmented…
The direct extension of ADMM for multi-block convex minimization problems is not necessarily convergent
- Mathematics · Mathematical Programming
- 2016
A sufficient condition is presented to ensure the convergence of the direct extension of ADMM to multi-block problems, and an example is given to show that it is not necessarily convergent.
A globally convergent augmented Lagrangian algorithm for optimization with general constraints and simple bounds
- Mathematics
- 1991
The global and local convergence properties of a class of augmented Lagrangian methods for solving nonlinear programming problems are considered. In such methods, simple bound constraints are treated…
Analysis and Design of Optimization Algorithms via Integral Quadratic Constraints
- Computer Science · SIAM Journal on Optimization
- 2016
A new framework to analyze and design iterative optimization algorithms built on the notion of Integral Quadratic Constraints (IQC) from robust control theory is developed, proving new inequalities about convex functions and providing a version of IQC theory adapted for use by optimization researchers.
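Schematically, this framework models an algorithm as a linear system in feedback with the gradient and certifies a rate via a small LMI; the sketch below uses the standard pointwise sector IQC for an m-strongly convex function with L-Lipschitz gradient and is a simplification, not the paper's full statement.

```latex
% Algorithm as a feedback interconnection:
%   x_{k+1} = A x_k + B u_k,   y_k = C x_k,   u_k = \nabla f(y_k).
% Pointwise sector IQC satisfied by u_k = \nabla f(y_k):
\begin{equation}
  \begin{bmatrix} y_k - y^\star \\ u_k - u^\star \end{bmatrix}^{\!\top}
  \begin{bmatrix} -2mL & m+L \\ m+L & -2 \end{bmatrix}
  \begin{bmatrix} y_k - y^\star \\ u_k - u^\star \end{bmatrix} \ge 0.
\end{equation}
% Feasibility of an LMI in P \succ 0 and a multiplier \lambda \ge 0 built
% from (A, B, C) and this constraint certifies that \|x_k - x^\star\|
% decays at a linear rate \rho.
```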