Corpus ID: 3098176

A second order primal-dual method for nonsmooth convex composite optimization

@article{Dhingra2017ASO,
  title={A second order primal-dual method for nonsmooth convex composite optimization},
  author={Neil K. Dhingra and Sei Zhen Khong and M. Jovanovi{\'c}},
  journal={ArXiv},
  year={2017},
  volume={abs/1709.01610}
}
We develop a second order primal-dual method for optimization problems in which the objective function is given by the sum of a strongly convex twice differentiable term and a possibly nondifferentiable convex regularizer. After introducing an auxiliary variable, we utilize the proximal operator of the nonsmooth regularizer to transform the associated augmented Lagrangian into a function that is once, but not twice, continuously differentiable. The saddle point of this function corresponds to the solution of the original optimization problem.
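To make the construction concrete, the sketch below applies first-order primal-descent dual-ascent gradient iterations to the proximal augmented Lagrangian for an illustrative lasso-type instance (f(x) = 0.5*||Ax - b||^2 with A tall and full rank so that f is strongly convex, g = gamma*||x||_1, and T = I). It is a minimal sketch under these assumptions, not the authors' code: the second-order method developed in the paper replaces the gradient step with a generalized-Hessian (Newton-type) step, which is not shown here, and all names, step sizes, and the problem instance are illustrative.

```python
import numpy as np

def soft_threshold(v, kappa):
    # prox of kappa*||.||_1: elementwise shrinkage toward zero
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def primal_dual_pal(A, b, gamma, mu=1.0, step=1e-3, iters=20000):
    """Gradient primal-descent / dual-ascent on the proximal augmented
    Lagrangian for min_x 0.5*||Ax - b||^2 + gamma*||x||_1 (with T = I):

        L_mu(x; y) = f(x) + M_{mu g}(x + mu*y) - (mu/2)*||y||^2,

    where M_{mu g} is the Moreau envelope of g and
    grad M_{mu g}(v) = (v - prox_{mu g}(v)) / mu."""
    x = np.zeros(A.shape[1])  # primal variable
    y = np.zeros(A.shape[1])  # multiplier for the constraint x - z = 0
    for _ in range(iters):
        v = x + mu * y
        grad_env = (v - soft_threshold(v, mu * gamma)) / mu  # grad of Moreau envelope
        x = x - step * (A.T @ (A @ x - b) + grad_env)        # primal descent
        y = y + step * mu * (grad_env - y)                   # dual ascent
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 50))  # tall A so that f is strongly convex
    b = rng.standard_normal(100)
    x_hat = primal_dual_pal(A, b, gamma=0.1)
    print(np.count_nonzero(np.abs(x_hat) > 1e-6), "nonzero entries")
```

At the saddle point the auxiliary variable z = prox_{mu g}(x + mu*y) agrees with x, so the iteration recovers a solution of the original nonsmooth problem.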
Citations

Global exponential stability of primal-dual gradient flow dynamics based on the proximal augmented Lagrangian: A Lyapunov-based approach
TLDR: The quadratic Lyapunov function generalizes recent results from strongly convex problems with either affine equality or inequality constraints to a broader class of composite optimization problems with nonsmooth regularizers, and it provides a worst-case lower bound on the exponential decay rate.
On the Exponential Convergence Rate of Proximal Gradient Flow Algorithms
TLDR: It is proved that global exponential convergence can be achieved even in the absence of strong convexity, and a distributed implementation of the gradient flow dynamics based on the proximal augmented Lagrangian is given that guarantees global exponential stability for strongly convex problems.
Nesterov Acceleration for Equality-Constrained Convex Optimization via Continuously Differentiable Penalty Functions
TLDR: A framework is proposed that uses Nesterov's accelerated gradient method for unconstrained convex optimization and achieves a guaranteed rate of convergence better than state-of-the-art first-order algorithms for constrained convex optimization.
Global exponential stability of primal-dual gradient flow dynamics based on the proximal augmented Lagrangian
TLDR: A Lyapunov-based approach is used to demonstrate global exponential stability of the underlying dynamics when the differentiable part of the objective function is strongly convex and its gradient is Lipschitz continuous.
Online Optimization as a Feedback Controller: Stability and Tracking
TLDR: A modified algorithm is proposed that can track an approximate solution trajectory of the constrained optimization problem under less restrictive assumptions; under a sufficient time-scale separation between the dynamics of the LTI system and the algorithm, the LMI conditions can always be satisfied.
Structured covariance completion via proximal algorithms
TLDR: Customized algorithms that utilize the method of multipliers and the proximal augmented Lagrangian method are developed, allowing such covariance completion problems to be handled at substantially larger scales.
Optimal Sensor Selection via Proximal Optimization Algorithms
TLDR: This work proposes a customized proximal gradient method that scales better than standard SDP solvers and investigates alternative second-order extensions using the forward-backward quasi-Newton method for optimal sensor selection in large-scale dynamical systems.
Proximal Algorithms for Large-Scale Statistical Modeling and Sensor/Actuator Selection
TLDR: To address modeling and control of large-scale systems, a unified algorithmic framework based on proximal methods is developed that handles statistical modeling, as well as sensor and actuator selection, at substantially larger scales than what is amenable to current general-purpose solvers.
Optimization and control of large-scale networked systems
Proximal gradient flow and Douglas-Rachford splitting dynamics: global exponential stability via integral quadratic constraints
TLDR: This paper utilizes the theory of integral quadratic constraints to prove global exponential stability of the equilibrium points of the differential equations that govern the evolution of proximal gradient and Douglas-Rachford splitting flows, and establishes conditions for global exponential convergence even in the absence of strong convexity. A discretized sketch of the proximal gradient flow appears below.
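For concreteness, the proximal gradient flow studied in the entry above evolves as dx/dt = -x + prox_{mu g}(x - mu*grad f(x)). The sketch below is a forward-Euler discretization of these dynamics for an illustrative lasso-type instance; the instance, names, and step sizes are assumptions for illustration, not code from the cited paper.

```python
import numpy as np

def soft_threshold(v, kappa):
    # prox of kappa*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def proximal_gradient_flow(A, b, gamma, mu=1e-3, alpha=1.0, iters=5000):
    """Forward-Euler discretization of the proximal gradient flow

        dx/dt = -x + prox_{mu g}(x - mu*grad f(x))

    for f(x) = 0.5*||Ax - b||^2 and g = gamma*||.||_1. With alpha = 1 each
    Euler step is exactly the proximal gradient (ISTA) iteration."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x_plus = soft_threshold(x - mu * (A.T @ (A @ x - b)), mu * gamma)
        x = x + alpha * (x_plus - x)  # Euler step along the flow
    return x
```

The continuous-time flow and its discretization share the same equilibria, which is why stability certificates for the flow carry over to the familiar proximal gradient iteration.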

References

Showing 1-10 of 72 references
The Proximal Augmented Lagrangian Method for Nonsmooth Composite Optimization
TLDR: An algorithm based on the primal-descent dual-ascent gradient method is developed, and global (exponential) asymptotic stability is proved when the differentiable component of the objective function is (strongly) convex and the regularization term is convex.
A coordinate gradient descent method for nonsmooth separable minimization
TLDR: A (block) coordinate gradient descent method is proposed for solving this class of nonsmooth separable problems; global convergence is established and, under a local Lipschitzian error bound assumption, linear convergence is shown for this method.
A primal-dual augmented Lagrangian
TLDR: This paper considers the formulation of subproblems in which the objective function is a generalization of the Hestenes-Powell augmented Lagrangian function, and proposes two primal-dual variants of conventional primal methods.
Newton-Type Alternating Minimization Algorithm for Convex Optimization
TLDR: Experiments show that using limited-memory directions in NAMA (the Newton-type alternating minimization algorithm) greatly improves convergence speed over AMA (the alternating minimization algorithm) and its accelerated variant, and that the proposed method is well suited for embedded applications and large-scale problems.
Study of a primal-dual algorithm for equality constrained minimization
TLDR: The paper proposes a primal-dual algorithm for solving an equality constrained minimization problem and shows that the usual requirement of solving the penalty problem with a precision of the same size as the perturbation parameter can be replaced by a much less stringent criterion while guaranteeing the superlinear convergence property.
A globally and quadratically convergent primal–dual augmented Lagrangian algorithm for equality constrained optimization
TLDR: A Newton-like method is presented that is applied to a perturbation of the optimality system that follows from a reformulation of the initial problem by introducing an augmented Lagrangian function.
Forward–backward quasi-Newton methods for nonsmooth optimization problems
TLDR: This work proposes an algorithmic scheme that enjoys the same global convergence properties as forward-backward splitting (FBS) when the problem is convex, or when the objective function possesses the Kurdyka–Łojasiewicz property at its critical points; the analysis of superlinear convergence is based on an extension of the Dennis and Moré theorem.
An inexact successive quadratic approximation method for L-1 regularized optimization
TLDR: The inexactness conditions are based on a semismooth function that represents a (continuous) measure of the optimality conditions of the problem and that embodies the soft-thresholding iteration (a sketch of the soft-thresholding map and its generalized Jacobian appears after this list).
Forward-backward truncated Newton methods for convex composite optimization
This paper proposes two proximal Newton-CG methods for convex nonsmooth optimization problems in composite form. The algorithms are based on a reformulation of the original nonsmooth problem as the unconstrained minimization of a continuously differentiable function, the forward-backward envelope.
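Several of the references above build Newton-type steps on the soft-thresholding map, which is the proximal operator of the l1 norm and, being piecewise affine, is semismooth. The sketch below shows the map together with one element of its Clarke generalized Jacobian; it is an illustrative note on this shared machinery, not code from any of the cited papers.

```python
import numpy as np

def soft_threshold(v, kappa):
    # prox of kappa*||.||_1: elementwise shrinkage (piecewise affine, semismooth)
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def generalized_jacobian(v, kappa):
    # One element of the Clarke generalized Jacobian of soft_threshold at v:
    # a diagonal 0/1 matrix, 1 where the shrinkage is inactive (|v_i| > kappa),
    # 0 where it is active; at the kinks |v_i| = kappa any value in [0, 1]
    # is admissible (we pick 0 here).
    return np.diag((np.abs(v) > kappa).astype(float))
```

Diagonal generalized Jacobians of this kind are what make second-order (generalized Hessian or semismooth Newton) steps cheap to assemble for l1-regularized problems.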