# Iterative hard thresholding methods for $\ell_0$ regularized convex cone programming

@article{Lu2014IterativeHT, title={Iterative hard thresholding methods for $\ell_0$ regularized convex cone programming}, author={Zhaosong Lu}, journal={Mathematical Programming}, year={2014}, volume={147}, pages={125--154} }

In this paper we consider $\ell_0$ regularized convex cone programming problems. In particular, we first propose an iterative hard thresholding (IHT) method and its variant for solving $\ell_0$ regularized box constrained convex programming. We show that the sequence generated by these methods converges to a local minimizer. Also, we establish the iteration complexity of the IHT method for finding an $\epsilon$-local-optimal solution. We then propose a method for solving $\ell_0$…
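The core IHT update from the abstract can be sketched for the simplest instance, unconstrained $\ell_0$-regularized least squares. This is a minimal sketch: the function name `iht_l0` and the test problem are illustrative, and the box constraints and general cones treated in the paper are omitted. Each iteration takes a gradient step with stepsize $1/L$ and applies the hard-thresholding operator, which is the exact proximal map of the $\ell_0$ penalty.

```python
import numpy as np

def iht_l0(A, b, lam, L=None, iters=500):
    """Iterative hard thresholding for min 0.5*||Ax - b||^2 + lam*||x||_0.

    Gradient step with stepsize 1/L, followed by hard thresholding:
    entries with magnitude below sqrt(2*lam/L) are set to zero, which
    is the proximal operator of the l0 penalty at that stepsize.
    """
    if L is None:
        L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    thresh = np.sqrt(2.0 * lam / L)
    for _ in range(iters):
        g = A.T @ (A @ x - b)                      # gradient of smooth part
        y = x - g / L                              # forward (gradient) step
        x = np.where(np.abs(y) > thresh, y, 0.0)   # hard thresholding
    return x
```

On a noiseless overdetermined problem with a well-separated sparse signal, the iterates settle on the true support and the spurious entries fall below the threshold.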


## 83 Citations

Random Coordinate Descent Methods for $\ell_{0}$ Regularized Convex Optimization

- Mathematics, Computer Science · IEEE Transactions on Automatic Control
- 2015

This paper analyzes the convergence properties of an iterative hard thresholding based random coordinate descent algorithm and shows that any limit point of this algorithm is a local minimum from the second restricted class of local minimizers.

Newton method for $\ell_0$-regularized optimization.

- Mathematics
- 2020

As a tractable approach, regularization is frequently adopted in sparse optimization. This gives rise to regularized optimization, which aims at minimizing the $\ell_0$ norm or its continuous…

Fast Best Subset Selection: Coordinate Descent and Local Combinatorial Optimization Algorithms

- Computer Science, Mathematics · Oper. Res.
- 2020

This paper empirically demonstrates that a family of $\ell_0$-based estimators can outperform state-of-the-art sparse learning algorithms in terms of a combination of prediction, estimation, and variable selection metrics under various regimes (e.g., different signal strengths, feature correlations, numbers of samples and features).

Smoothing fast iterative hard thresholding algorithm for $\ell_0$ regularized nonsmooth convex regression problem

- Mathematics
- 2021

We investigate a class of constrained sparse regression problem with cardinality penalty, where the feasible set is defined by box constraint, and the loss function is convex, but not necessarily…

A Note on the Complexity of Proximal Iterative Hard Thresholding Algorithm

- Mathematics
- 2015

The iterative hard thresholding (IHT) algorithm is a powerful and efficient algorithm for solving $\ell_0$-regularized problems and inspired many applications in sparse-approximation and…

Iteration complexity analysis of random coordinate descent methods for $\ell_0$ regularized convex problems

- Mathematics
- 2014

In this paper we analyze a family of general random block coordinate descent methods for the minimization of $\ell_0$ regularized optimization problems, i.e. the objective function is composed of a…

A New Proximal Iterative Hard Thresholding Method with Extrapolation for $\ell_0$ Minimization

- Computer Science, Mathematics · J. Sci. Comput.
- 2019

This paper proposes a proximal iterative hard thresholding method with an extrapolation step for acceleration, and establishes that the generated sequence globally converges to a local minimizer of the objective function.

Optimization over Sparse Symmetric Sets via a Nonmonotone Projected Gradient Method

- Mathematics, Computer Science
- 2015

The computational results demonstrate that NPG has substantially better solution quality than PG; moreover, it is at least comparable to, and sometimes much faster than, PG in terms of speed.

Lagrange Dual Method for Sparsity Constrained Optimization

- Computer Science · IEEE Access
- 2018

It is shown that the proposed Lagrange dual method for the sparsity constrained optimization problem converges to an $L$-stationary point of the primal problem.

A Homotopy Coordinate Descent Optimization Method for l0-Norm Regularized Least Square Problem

- Computer Science · ArXiv
- 2020

Computational experiments demonstrate effectiveness of the proposed homotopy coordinate descent method, in accurately and efficiently reconstructing sparse solutions of the $l_0$-LS problem, whether the observation is noisy or not.

## References

Showing 1–10 of 31 references

Sparse Approximation via Penalty Decomposition Methods

- Mathematics, Computer Science · SIAM J. Optim.
- 2013

This paper considers sparse approximation problems, that is, general minimization problems with the $l_0$-"norm" of a vector being a part of constraints or objective function, and proposes penalty decomposition methods for solving them in which a sequence of penalty subproblems are solved by a block coordinate descent method.

Hard Thresholding Pursuit: An Algorithm for Compressive Sensing

- Mathematics, Computer Science · SIAM J. Numer. Anal.
- 2011

A new iterative algorithm to find sparse solutions of underdetermined linear systems is introduced and it is shown that, under a certain condition on the restricted isometry constant of the matrix of the linear system, the Hard Thresholding Pursuit algorithm indeed finds all $s$-sparse solutions.
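The Hard Thresholding Pursuit iteration summarized above can be given as a minimal numpy sketch. The function name `htp` and the default stepsize are assumptions; the algorithm alternates a gradient step, selection of the $s$ largest-magnitude entries, and a least-squares refit on that support.

```python
import numpy as np

def htp(A, b, s, mu=None, iters=50):
    """Hard Thresholding Pursuit for finding an s-sparse x with Ax ~ b.

    Each iteration: gradient step, keep the s largest-magnitude
    entries as the candidate support, then refit by least squares
    restricted to that support.
    """
    m, n = A.shape
    if mu is None:
        mu = 1.0 / np.linalg.norm(A, 2) ** 2  # conservative stepsize
    x = np.zeros(n)
    for _ in range(iters):
        y = x + mu * (A.T @ (b - A @ x))       # gradient step
        S = np.argsort(np.abs(y))[-s:]         # indices of s largest entries
        x = np.zeros(n)
        x[S] = np.linalg.lstsq(A[:, S], b, rcond=None)[0]  # refit on support
    return x
```

Because the final step is an exact least-squares solve on the chosen support, HTP recovers a noiseless sparse signal exactly once the correct support is identified.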

Iteration-complexity of first-order penalty methods for convex programming

- Mathematics, Computer Science · Math. Program.
- 2013

This paper studies the computational complexity of quadratic penalty based methods for solving a special but broad class of convex programming problems whose feasible region is a simple compact convex set intersected with the inverse image of a closed convex cone under an affine transformation.

A Singular Value Thresholding Algorithm for Matrix Completion

- Computer Science, Mathematics · SIAM J. Optim.
- 2010

This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank, and develops a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms.
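The central building block of that algorithm is singular value shrinkage, the proximal operator of the nuclear norm. A minimal sketch follows; the helper name `svt` is illustrative, and the full SVT algorithm wraps this operator in an iteration on the observed entries, which is omitted here.

```python
import numpy as np

def svt(M, tau):
    """Singular value soft-thresholding: the prox operator of tau*||.||_*.

    Computes the SVD, shrinks each singular value by tau, and drops
    any that become negative, yielding a lower-rank matrix.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return U @ np.diag(s) @ Vt
```

Singular values below `tau` are eliminated entirely, which is why the iterates in matrix completion stay low-rank.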

Guaranteed Rank Minimization via Singular Value Projection

- Computer Science, Mathematics · NIPS
- 2010

Results show that the SVP-Newton method is significantly robust to noise and performs impressively on a more realistic power-law sampling scheme for the matrix completion problem.

Normalized Iterative Hard Thresholding: Guaranteed Stability and Performance

- Mathematics, Computer Science · IEEE Journal of Selected Topics in Signal Processing
- 2010

With this modification, empirical evidence suggests that the algorithm is faster than many other state-of-the-art approaches while showing similar performance, and the modified algorithm retains theoretical performance guarantees similar to the original algorithm.

Iterative Thresholding for Sparse Approximations

- Mathematics
- 2008

Sparse signal expansions represent or approximate a signal using a small number of elements from a large collection of elementary waveforms. Finding the optimal sparse expansion is known to be NP…

Greedy sparsity-constrained optimization

- Computer Science, Mathematics · 2011 Conference Record of the Forty Fifth Asilomar Conference on Signals, Systems and Computers (ASILOMAR)
- 2011

This paper presents a greedy algorithm, dubbed Gradient Support Pursuit (GraSP), for sparsity-constrained optimization, and quantifiable guarantees are provided for GraSP when cost functions have the “Stable Hessian Property”.

Introductory Lectures on Convex Optimization - A Basic Course

- Computer Science · Applied Optimization
- 2004

It was in the middle of the 1980s when the seminal paper by Karmarkar opened a new epoch in nonlinear optimization, and it became more and more common that new methods were provided with a complexity analysis, which was considered a better justification of their efficiency than computational experiments.

Greed is good: algorithmic results for sparse approximation

- Mathematics, Computer Science · IEEE Transactions on Information Theory
- 2004

This article presents new results on using a greedy algorithm, orthogonal matching pursuit (OMP), to solve the sparse approximation problem over redundant dictionaries and develops a sufficient condition under which OMP can identify atoms from an optimal approximation of a nonsparse signal.
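The OMP loop analyzed in that article can be sketched in a few lines; the function name `omp` is illustrative. Each pass greedily selects the atom most correlated with the current residual, then re-solves least squares on the accumulated support.

```python
import numpy as np

def omp(A, b, s):
    """Orthogonal Matching Pursuit: build an s-atom approximation of b.

    Greedily pick the column of A most correlated with the residual,
    then refit coefficients by least squares on the chosen support,
    so the residual stays orthogonal to all selected atoms.
    """
    residual = b.copy()
    support = []
    for _ in range(s):
        j = int(np.argmax(np.abs(A.T @ residual)))  # best-matching atom
        support.append(j)
        coef = np.linalg.lstsq(A[:, support], b, rcond=None)[0]
        residual = b - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

Since the refit keeps the residual orthogonal to every selected column, no atom is ever picked twice, and on a noiseless sparse problem the residual vanishes once the true support is found.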