Corpus ID: 254096062

Newton Method with Variable Selection by the Proximal Gradient Method

@inproceedings{Shimmura2022NewtonMW,
  title={Newton Method with Variable Selection by the Proximal Gradient Method},
  author={Ryosuke Shimmura and Joe Suzuki},
  year={2022}
}
In sparse estimation, in which the sum of the loss function and the regularization term is minimized, methods such as the proximal gradient method and the proximal Newton method are applied. The former is slow to converge to a solution, while the latter converges quickly but is inefficient for problems such as the group lasso. In this paper, we examine how to efficiently find a solution by finding the convergence destination of the proximal gradient method. However, the case in which the…
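For context, the objective here is the sum of a smooth loss f and a nonsmooth regularizer g, and the proximal gradient method alternates a gradient step on f with the proximal operator of g. The following is a minimal Python sketch of that iteration for the lasso, whose proximal operator is soft-thresholding; the function names, step size, and iteration count are illustrative assumptions, not code from the paper.

    import numpy as np

    def soft_threshold(v, tau):
        # Proximal operator of tau * ||.||_1 (soft-thresholding).
        return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

    def proximal_gradient_lasso(X, y, lam, n_iter=500):
        # Minimize 0.5 * ||y - X b||^2 / n + lam * ||b||_1 by the proximal
        # gradient method (ISTA) with a fixed step size 1/L.
        n, p = X.shape
        L = np.linalg.norm(X, 2) ** 2 / n          # Lipschitz constant of the smooth gradient
        b = np.zeros(p)
        for _ in range(n_iter):
            grad = X.T @ (X @ b - y) / n           # gradient step on the smooth loss
            b = soft_threshold(b - grad / L, lam / L)   # proximal step on the penalty
        return b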

References

Showing 1–10 of 12 references

A Regularized Semi-Smooth Newton Method with Projection Steps for Composite Convex Programs

This paper develops second-order-type methods for composite convex programs by proposing an adaptive semi-smooth Newton method and establishing its convergence to global optimality through a combination of a regularization approach and a known hyperplane projection technique.

Forward–backward quasi-Newton methods for nonsmooth optimization problems

This work proposes an algorithmic scheme that enjoys the same global convergence properties as FBS when the problem is convex, or when the objective function possesses the Kurdyka–Łojasiewicz property at its critical points; the analysis of superlinear convergence is based on an extension of the Dennis and Moré theorem.

A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems

A new fast iterative shrinkage-thresholding algorithm (FISTA) is presented that preserves the computational simplicity of ISTA but has a global rate of convergence proven to be significantly better, both theoretically and practically.
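To illustrate the acceleration this reference describes, the sketch below adds FISTA's extrapolation (momentum) step to the plain proximal gradient iteration sketched above; the names and parameters are again illustrative assumptions rather than code from any of the papers cited here.

    import numpy as np

    def fista_lasso(X, y, lam, n_iter=500):
        # FISTA for 0.5 * ||y - X b||^2 / n + lam * ||b||_1.
        n, p = X.shape
        L = np.linalg.norm(X, 2) ** 2 / n           # Lipschitz constant of the smooth gradient
        b = np.zeros(p)                             # current iterate
        z = b.copy()                                # extrapolated point
        t = 1.0
        for _ in range(n_iter):
            grad = X.T @ (X @ z - y) / n
            v = z - grad / L
            b_next = np.sign(v) * np.maximum(np.abs(v) - lam / L, 0.0)  # soft-thresholding
            t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
            z = b_next + ((t - 1.0) / t_next) * (b_next - b)            # momentum step
            b, t = b_next, t_next
        return b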

Newton and Quasi-Newton Methods for a Class of Nonsmooth Equations and Related Problems

The Q-superlinear convergence of the Newton method and of the quasi-Newton method is established under suitable assumptions, in which the existence of F'(x*) is not assumed, and the new algorithms only need to solve a linear equation at each step.

Proximal Newton methods for convex composite optimization

Two proximal Newton methods for convex nonsmooth optimization problems in composite form are proposed, which combine the global efficiency estimates of the corresponding first-order methods while achieving fast asymptotic convergence rates.
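For reference, a proximal Newton step replaces the scalar step size of the proximal gradient method with a curvature matrix; in its commonly stated form (a standard restatement, not a formula taken from this page) the update solves the scaled subproblem

    x_{k+1} \in \operatorname*{arg\,min}_{x} \; \nabla f(x_k)^{\top}(x - x_k)
        + \tfrac{1}{2}(x - x_k)^{\top} H_k (x - x_k) + g(x),

where f is the smooth loss, g is the nonsmooth regularizer, and H_k is the Hessian of f (or an approximation to it). The cost of solving this subproblem for structured penalties is what the abstract above points to when it calls the proximal Newton method inefficient for the group lasso.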

Semismooth Newton Methods for Variational Inequalities and Constrained Optimization Problems in Function Spaces

  • M. Ulbrich, MOS-SIAM Series on Optimization, 2011
The author covers adjoint-based derivative computation and the efficient solution of Newton systems by multigrid and preconditioned iterative methods.

Forward-backward truncated Newton methods for convex composite optimization

Two proximal Newton-CG methods for convex nonsmooth optimization problems in composite form are proposed, which combine the global efficiency estimates of the corresponding first-order methods while achieving fast asymptotic convergence rates.

Regression Shrinkage and Selection via the Lasso

A new method for estimation in linear models called the lasso is proposed, which minimizes the residual sum of squares subject to the sum of the absolute values of the coefficients being less than a constant.
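In a standard restatement of that formulation (not text from this page), the lasso estimate solves

    \hat{\beta} = \operatorname*{arg\,min}_{\beta} \sum_{i=1}^{n} \Big( y_i - \beta_0 - \sum_{j} x_{ij}\beta_j \Big)^2
        \quad \text{subject to} \quad \sum_{j} |\beta_j| \le t,

where t \ge 0 is the tuning constant that controls the amount of shrinkage; the equivalent Lagrangian form with penalty \lambda \|\beta\|_1 is the one used in the proximal sketches above.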

Model selection and estimation in regression with grouped variables

We consider the problem of selecting grouped variables (factors) for accurate prediction in regression. Such a problem arises naturally in many practical situations with the multifactor…
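The group lasso objective introduced in this reference is commonly written (again a standard restatement, not text from this page) as

    \min_{\beta} \; \tfrac{1}{2} \Big\| y - \sum_{g=1}^{G} X_g \beta_g \Big\|_2^2
        + \lambda \sum_{g=1}^{G} \sqrt{p_g}\, \|\beta_g\|_2,

where \beta_g is the coefficient block for group g and p_g is its size; the unsquared \ell_2 norm is what sets entire groups of coefficients to zero at once.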

Convex Analysis and Monotone Operator Theory in Hilbert Spaces

This book provides a largely self-contained account of the main results of convex analysis and optimization in Hilbert space. A concise exposition of related constructive fixed point theory is…