A globally convergent proximal Newton-type method in nonsmooth convex optimization

@article{Mordukhovich2022AGC,
  title={A globally convergent proximal Newton-type method in nonsmooth convex optimization},
  author={Boris S. Mordukhovich and Xiaoming Yuan and Shangzhi Zeng and Jin Zhang},
  journal={Mathematical Programming},
  year={2022}
}
The paper proposes and justifies a new algorithm of the proximal Newton type to solve a broad class of nonsmooth composite convex optimization problems without strong convexity assumptions. Based on advanced notions and techniques of variational analysis, we establish implementable results on the global convergence of the proposed algorithm as well as its local convergence with superlinear and quadratic rates. For certain structural problems, the obtained local convergence conditions do not… 
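To make the class of methods concrete, here is a minimal sketch of a generic proximal Newton-type iteration for minimizing g(x) + h(x), with g smooth convex and h convex with an inexpensive proximal mapping. The inner proximal-gradient subproblem solver, the Armijo-type backtracking rule, and all parameter values are illustrative assumptions; this is not the paper's specific algorithm or its convergence safeguards.

import numpy as np

def prox_l1(z, t):
    # Proximal mapping of t * ||.||_1 (componentwise soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def proximal_newton(g, grad_g, hess_g, h, prox_h, x0,
                    max_iter=100, sub_iter=50, beta=0.5, sigma=1e-4, tol=1e-8):
    # Generic proximal Newton-type method for min g(x) + h(x).
    # Each outer step builds the quadratic model
    #   q(d) = <grad g(x), d> + 0.5 d^T H d + h(x + d),
    # minimizes it approximately by proximal gradient, then backtracks
    # on the true composite objective F = g + h.
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        grad, H = grad_g(x), hess_g(x)
        L = np.linalg.norm(H, 2) + 1e-12      # inner step size 1/L
        d = np.zeros_like(x)
        for _ in range(sub_iter):             # inexact subproblem solve
            q_grad = grad + H @ d
            d = prox_h(x + d - q_grad / L, 1.0 / L) - x
        if np.linalg.norm(d) <= tol:
            break
        F_x = g(x) + h(x)
        decrease = grad @ d + h(x + d) - h(x)  # predicted decrease (nonpositive at an exact subproblem solution)
        t = 1.0
        while t > 1e-12 and g(x + t * d) + h(x + t * d) > F_x + sigma * t * decrease:
            t *= beta                          # Armijo-type backtracking
        x = x + t * d
    return x

# Example use on a lasso-type problem: min 0.5*||A x - b||^2 + lam*||x||_1.
rng = np.random.default_rng(0)
A, b, lam = rng.standard_normal((20, 5)), rng.standard_normal(20), 0.1
g = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad_g = lambda x: A.T @ (A @ x - b)
hess_g = lambda x: A.T @ A
h = lambda x: lam * np.sum(np.abs(x))
prox_h = lambda z, t: prox_l1(z, lam * t)
x_star = proximal_newton(g, grad_g, hess_g, h, prox_h, np.zeros(5))

Solving the scaled subproblem only approximately is what makes such schemes implementable; the line search on the composite objective is what gives global convergence without strong convexity.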
Generalized Damped Newton Algorithms in Nonsmooth Optimization with Applications to Lasso Problems
TLDR
New globally convergent algorithms of the generalized damped Newton type for solving important classes of nonsmooth optimization problems with extended-real-valued cost functions, which typically arise in machine learning and statistics.
Advances in Convergence and Scope of the Proximal Point Algorithm
The proximal point algorithm, as an approach to finding a zero of a maximal monotone mapping, is well known for its role in numerical optimization, such as in augmented Lagrangian methods of multipliers (ALM). Although…
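For reference, the proximal point iteration for a maximal monotone operator $T$ can be written in the standard form (generic notation, not specific to this paper):

$$x^{k+1} = (I + c_k T)^{-1}(x^k), \qquad c_k > 0,$$

which for $T = \partial f$ with $f$ convex reduces to $x^{k+1} = \operatorname{argmin}_x \{\, f(x) + \tfrac{1}{2 c_k}\|x - x^k\|^2 \,\}$.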
A globally convergent regularized Newton method for $\ell_q$-norm composite optimization problems
This paper is concerned with $\ell_q$ ($0<q<1$)-norm regularized minimization problems with a twice continuously differentiable loss function. For this class of nonconvex and nonsmooth composite problems, …
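The problem class referred to here has the composite form (with a generic loss $f$ and weight $\lambda$):

$$\min_{x \in \mathbb{R}^n} \; f(x) + \lambda \|x\|_q^q, \qquad \|x\|_q^q := \sum_{i=1}^n |x_i|^q, \quad 0 < q < 1, \ \lambda > 0,$$

where $f$ is twice continuously differentiable; the regularizer is nonconvex and nonsmooth, which is what makes a globally convergent regularized Newton method nontrivial to design.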
Minimizing oracle-structured composite functions
TLDR
A method is proposed that makes minimal assumptions about the two functions, does not require tuning of algorithm parameters, and works well in practice across a variety of problems; it is shown to be more efficient than standard solvers when the oracle function involves a large amount of data.
Globally Convergent Coderivative-Based Generalized Newton Methods in Nonsmooth Optimization
This paper proposes and justifies two new globally convergent Newton-type methods to solve unconstrained and constrained problems of nonsmooth optimization by using tools of variational analysis and …
Generalized Damped Newton Algorithms in Nonsmooth Optimization via Second-Order Subdifferentials
TLDR
New globally convergent algorithms of the generalized damped Newton type for solving important classes of nonsmooth optimization problems with extended-real-valued cost functions, which typically arise in machine learning and statistics.
Estimates of Generalized Hessians for Optimal Value Functions in Mathematical Programming
  • A. Zemkoho, Set-Valued and Variational Analysis, 2021
TLDR
The main goal of this paper is to provide estimates of the generalized Hessian for the optimal value function, which could enable the development of robust solution algorithms, such as the Newton method.

References

SHOWING 1-10 OF 50 REFERENCES
Generalized Newton Algorithms for Tilt-Stable Minimizers in Nonsmooth Optimization
TLDR
Two versions of the generalized Newton method, based on graphical derivatives, are developed to compute not merely arbitrary local minimizers of nonsmooth optimization problems but those possessing an important stability property known as tilt stability.
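For context, tilt stability (in the sense of Poliquin and Rockafellar) of a local minimizer $\bar{x}$ of $f$ can be recalled as follows: there is $\gamma > 0$ such that the mapping

$$v \mapsto \operatorname{argmin}\{\, f(x) - \langle v, x\rangle \;:\; \|x - \bar{x}\| \le \gamma \,\}$$

is single-valued and Lipschitz continuous on some neighborhood of $v = 0$, with value $\bar{x}$ at $v = 0$.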
Proximal quasi-Newton methods for nondifferentiable convex optimization
TLDR
The method monitors the reduction in the value of $\|v_k\|$ to identify when a line search on f should be used; it converges globally, and the rate of convergence is Q-linear.
A family of inexact SQA methods for non-smooth convex minimization with provable convergence guarantees based on the Luo–Tseng error bound property
TLDR
This work proves that when the problem possesses the so-called Luo–Tseng error bound (EB) property, IRPN converges globally to an optimal solution, and the local convergence rate of the sequence of iterates generated by IRPN is linear, superlinear, or even quadratic, depending on the choice of parameters of the algorithm.
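As background, one common formulation of the Luo–Tseng error bound for a composite objective $F = f + h$ (with $f$ smooth, $h$ convex, and $\mathcal{X}^*$ the optimal solution set; a generic statement, not a quotation from the paper): for every $\zeta \ge \inf F$ there exist $\varepsilon, \kappa > 0$ such that

$$\operatorname{dist}(x, \mathcal{X}^*) \le \kappa \,\|\operatorname{prox}_h(x - \nabla f(x)) - x\|$$

whenever $F(x) \le \zeta$ and $\|\operatorname{prox}_h(x - \nabla f(x)) - x\| \le \varepsilon$.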
On the linear convergence of descent methods for convex essentially smooth minimization
TLDR
Linear convergence is established for both the gradient projection algorithm of Goldstein and of Levitin and Polyak and for a matrix splitting algorithm using regular splittings, without requiring that the cost function be strongly convex or that the optimal solution set be bounded.
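The gradient projection iteration analyzed there takes the standard form (with $P_X$ the projection onto the feasible set $X$ and $\alpha_k > 0$ a step size):

$$x^{k+1} = P_X\big(x^k - \alpha_k \nabla f(x^k)\big).$$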
Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization
TLDR
This work shows that both the basic proximal-gradient method and the accelerated proximal-gradient method achieve the same convergence rate as in the error-free case, provided that the errors decrease at appropriate rates.
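In this setting the inexact proximal-gradient step can be written as (generic notation: $e^k$ is the error in the gradient, and $\varepsilon_k$ is the accuracy to which the proximal subproblem is solved)

$$x^{k+1} \approx_{\varepsilon_k} \operatorname{prox}_{\alpha h}\big(x^k - \alpha(\nabla f(x^k) + e^k)\big),$$

meaning $x^{k+1}$ minimizes the proximal subproblem up to accuracy $\varepsilon_k$; roughly, the stated rates are retained when the error sequences are summable, with faster decay required for the accelerated variant.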
Error Bounds, Quadratic Growth, and Linear Convergence of Proximal Methods
TLDR
This work explains the observed linear convergence intuitively by proving the equivalence of such an error bound to a natural quadratic growth condition, and extends the linear convergence analysis to proximal methods for minimizing compositions of nonsmooth functions with smooth mappings.
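The quadratic growth condition referred to here can be written as (standard form, with $F^*$ the optimal value and $\mathcal{X}^*$ the solution set):

$$F(x) \ge F^* + \frac{\mu}{2}\operatorname{dist}(x, \mathcal{X}^*)^2 \quad \text{for all } x \text{ near } \mathcal{X}^*, \ \mu > 0.$$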
A Generalized Newton Method for Subgradient Systems
This paper proposes and develops a new Newton-type algorithm to solve subdifferential inclusions defined by subgradients of extended-real-valued prox-regular functions. The proposed algorithm is …
Linear convergence of first order methods for non-strongly convex optimization
TLDR
This paper derives linear convergence rates of several first order methods for solving smooth non-strongly convex constrained optimization problems, i.e. involving an objective function with a Lipschitz continuous gradient that satisfies some relaxed strong convexity condition.
Local behavior of an iterative framework for generalized equations with nonisolated solutions
TLDR
Results deal with error bounds and upper Lipschitz-continuity properties for generalized equations with nonisolated solutions, including monotone mixed complementarity problems, Karush-Kuhn-Tucker systems arising from nonlinear programs, and nonlinear equations.
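As a reminder of the terminology, a solution mapping $S$ is upper Lipschitz continuous at a parameter value $\bar{p}$ (in Robinson's sense) if there exist $L, \delta > 0$ with

$$S(p) \subset S(\bar{p}) + L\,\|p - \bar{p}\|\,\mathbb{B} \quad \text{whenever } \|p - \bar{p}\| \le \delta,$$

where $\mathbb{B}$ is the closed unit ball; the error bounds in this reference are statements of this general type.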
An inexact successive quadratic approximation method for L-1 regularized optimization
TLDR
The inexactness conditions are based on a semi-smooth function that represents a (continuous) measure of the optimality conditions of the problem, and that embodies the soft-thresholding iteration.
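As a concrete reference for the soft-thresholding iteration mentioned above, a minimal sketch (names and the step-size rule are illustrative; this is the classical iteration, not the paper's inexactness test):

import numpy as np

def soft_threshold(z, t):
    # Componentwise proximal mapping of t * ||.||_1.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def soft_thresholding_step(x, grad_f, alpha, lam):
    # One proximal-gradient (ISTA) iterate for min f(x) + lam * ||x||_1
    # with step size alpha: x+ = S_{alpha*lam}(x - alpha * grad f(x)).
    return soft_threshold(x - alpha * grad_f(x), alpha * lam)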