Proximal Mapping for Symmetric Penalty and Sparsity

@article{Beck2018ProximalMF,
  title={Proximal Mapping for Symmetric Penalty and Sparsity},
  author={Amir Beck and Nadav Hallak},
  journal={SIAM J. Optim.},
  year={2018},
  volume={28},
  pages={496-527}
}
This paper studies a class of problems consisting of minimizing a continuously differentiable function penalized with the so-called $\ell_0$-norm over a symmetric set. These problems are hard to solve, yet prominent in many fields and applications. We first study the proximal mapping with respect to the $\ell_0$-norm over symmetric sets and provide an efficient method for computing it. The method is then improved for symmetric sets satisfying a submodularity-like property, which we call second…
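For context, when the symmetric set is all of $\mathbb{R}^n$, the proximal mapping of $\lambda\|x\|_0$ reduces to the classical hard-thresholding operator. The sketch below (plain NumPy; the function name and the unconstrained setting are illustrative, not the paper's symmetric-set algorithm) shows this special case of the object the paper generalizes.

```python
import numpy as np

def prox_l0(y, lam):
    """Proximal mapping of x -> lam * ||x||_0 at y, i.e. a minimizer of
    0.5 * ||x - y||^2 + lam * ||x||_0.

    Componentwise: keep y_i when |y_i| > sqrt(2 * lam), zero it otherwise
    (at |y_i| = sqrt(2 * lam) both choices are optimal; this picks zero).
    """
    return np.where(np.abs(y) > np.sqrt(2.0 * lam), y, 0.0)

print(prox_l0(np.array([0.3, -1.5, 0.05, 2.0]), lam=0.5))
# threshold sqrt(2 * 0.5) = 1.0  ->  [ 0.  -1.5  0.   2. ]
```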

Citations

Newton method for $\ell_0$-regularized optimization
TLDR
This paper develops a Newton-type method for $\ell_0$-regularized optimization and proves that the generated sequence converges to a stationary point globally and quadratically under standard assumptions, theoretically explaining why the method performs surprisingly well.
Subspace Newton method for the $\ell_0$-regularized optimization
TLDR
A subspace Newton method, SNL0, is designed, and its generated sequence is proved to converge globally to a stationary point under a strong smoothness condition; a novel mechanism is created to update the penalty parameter effectively, which removes the tedious parameter tuning that most regularized optimization methods suffer from.
Optimization problems involving group sparsity terms
This paper studies a problem in general form in which a lower-bounded continuously differentiable function is minimized over a block-separable set incorporating a group sparsity expression as a…
Inertial Proximal Block Coordinate Method for a Class of Nonsmooth and Nonconvex Sum-of-Ratios Optimization Problems
TLDR
This paper considers a class of nonsmooth and nonconvex sum-of-ratios fractional optimization problems with block structure and proposes an inertial proximal block coordinate method that exploits that structure to solve them.
Projected Neural Network for a Class of Sparse Regression with Cardinality Penalty
TLDR
This paper proposes a projected neural network and designs a correction method for solving a class of sparse regression problems whose objective function is the sum of a convex loss function and a cardinality penalty.
New Insights on the Optimality Conditions of the ℓ2-ℓ0 Minimization Problem
TLDR
This paper provides a comprehensive review of commonly used necessary optimality conditions as well as known relationships between them, and completes this hierarchy of conditions by proving new inclusion properties between the sets of candidate solutions associated with them.
A Second Order Algorithm for MCP Regularized Optimization
TLDR
Two optimality properties of MCP-regularized optimization are provided: one shows that the support set of a local minimizer corresponds to linearly independent columns of $A$; the other provides two sufficient conditions for a stationary point to be a local minimizer.
Inertial Proximal Block Coordinate Method for a Class of Nonsmooth Sum-of-Ratios Optimization Problems
TLDR
This paper proposes an inertial proximal block coordinate method for solving a class of nonsmooth sum-of-ratios fractional optimization problems with block structure and identifies the explicit exponents of the KL property for three important structured fractional optimization problems.
Cardinality Minimization, Constraints, and Regularization: A Survey
TLDR
It is highlighted that modern mixed-integer programming can in fact produce provably high-quality or even optimal solutions for cardinality optimization problems, even in large-scale real-world settings.
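The mixed-integer viewpoint can be made concrete with a standard big-M formulation of $\min \|x\|_0$ subject to $Ax = b$. The sketch below uses the PuLP modeling layer; the bound M, the solver choice, and all names are assumptions of this illustration, not the survey's own formulation.

```python
import numpy as np
from pulp import LpProblem, LpVariable, LpMinimize, lpSum

def min_cardinality(A, b, M=10.0):
    """Big-M MIP for min ||x||_0 s.t. Ax = b.

    Binary z_i indicates whether x_i may be nonzero; M is an assumed
    a-priori bound on |x_i| and must be valid for the model to be exact.
    """
    m, n = A.shape
    prob = LpProblem("cardinality_min", LpMinimize)
    x = [LpVariable(f"x{i}", lowBound=-M, upBound=M) for i in range(n)]
    z = [LpVariable(f"z{i}", cat="Binary") for i in range(n)]
    prob += lpSum(z)                                   # objective: count of nonzeros
    for j in range(m):                                 # the linear system Ax = b
        prob += lpSum(A[j, i] * x[i] for i in range(n)) == b[j]
    for i in range(n):                                 # force x_i = 0 when z_i = 0
        prob += x[i] <= M * z[i]
        prob += x[i] >= -M * z[i]
    prob.solve()
    return np.array([v.value() for v in x])
```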
…

References

Showing 1-10 of 32 references
Sparse Approximation via Penalty Decomposition Methods
TLDR
This paper considers sparse approximation problems, that is, general minimization problems with the $\ell_0$-"norm" of a vector appearing in the constraints or the objective function, and proposes penalty decomposition methods for solving them, in which a sequence of penalty subproblems is solved by a block coordinate descent method.
On the Minimization Over Sparse Symmetric Sets: Projections, Optimality Conditions, and Algorithms
TLDR
This work considers the problem of minimizing a general continuously differentiable function over symmetric sets under sparsity constraints, and derives efficient methods for computing sparse projections under various symmetry assumptions.
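In the simplest symmetric case, $B = \mathbb{R}^n$, the sparse projection onto $\{x : \|x\|_0 \le s\}$ just keeps the $s$ entries of largest magnitude. A minimal sketch of that special case follows (the symmetric-set projections derived in the paper are more involved; the function name is illustrative).

```python
import numpy as np

def project_sparse(y, s):
    """Euclidean projection of y onto {x : ||x||_0 <= s}.

    Keeps the s entries of largest magnitude and zeros the rest; ties
    make the projection set-valued, and argsort picks one member.
    """
    x = np.zeros_like(y)
    keep = np.argsort(np.abs(y))[-s:]   # indices of the s largest |y_i|
    x[keep] = y[keep]
    return x

print(project_sparse(np.array([0.3, -1.5, 0.05, 2.0]), s=2))  # [ 0.  -1.5  0.   2. ]
```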
Reweighted ℓ1-Minimization for Sparse Solutions to Underdetermined Linear Systems
TLDR
The concept of the range space property (RSP) of a matrix is introduced, and it is proved that if the transpose of the matrix has this property, the reweighted $\ell_1$-algorithm can find a sparse solution to the underdetermined linear system, provided that the merit function for sparsity is properly chosen.
Sparsity Constrained Nonlinear Optimization: Optimality Conditions and Algorithms
This paper treats the problem of minimizing a general continuously differentiable function subject to sparsity constraints. We present and analyze several different optimality criteria which are…
Proximal alternating linearized minimization for nonconvex and nonsmooth problems
TLDR
A self-contained convergence analysis framework is derived and it is established that each bounded sequence generated by PALM globally converges to a critical point.
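A minimal sketch of the PALM iteration for a two-block objective $f(x) + g(y) + H(x, y)$, assuming user-supplied block gradients of $H$, prox operators of $f$ and $g$, and fixed block Lipschitz moduli (the paper lets the step sizes vary with the iterate).

```python
def palm(x, y, grad_x, grad_y, prox_f, prox_g, Lx, Ly, iters=100):
    """Proximal alternating linearized minimization (sketch).

    Alternates a proximal-linearized step on each block:
    x <- prox_{f/Lx}(x - grad_x H(x, y) / Lx), then the same for y.
    prox_f(v, t) should return argmin_u f(u) + ||u - v||^2 / (2 * t).
    """
    for _ in range(iters):
        x = prox_f(x - grad_x(x, y) / Lx, 1.0 / Lx)
        y = prox_g(y - grad_y(x, y) / Ly, 1.0 / Ly)
    return x, y
```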
Iterative Thresholding for Sparse Approximations
TLDR
This paper studies two iterative algorithms that minimize the cost functions of interest, and shows on an example that an adaptation of the algorithms can achieve results lying between those obtained with Matching Pursuit and those of Orthogonal Matching Pursuit, while retaining the computational complexity of Matching Pursuit.
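The hard-thresholding iteration at the heart of this line of work, sketched for the sparsity-constrained least-squares problem; the constant step size derived from $\|A\|_2$ is an assumption of this sketch rather than the paper's choice.

```python
import numpy as np

def iht(A, b, s, iters=200):
    """Iterative hard thresholding for min ||Ax - b||^2 s.t. ||x||_0 <= s.

    Each iteration takes a gradient step on the least-squares term and
    then keeps only the s entries of largest magnitude.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step from the spectral norm
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + step * A.T @ (b - A @ x)     # gradient step
        x = np.zeros_like(g)
        keep = np.argsort(np.abs(g))[-s:]    # s largest magnitudes survive
        x[keep] = g[keep]
    return x
```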
From Sparse Solutions of Systems of Equations to Sparse Modeling of Signals and Images
TLDR
The aim of this paper is to introduce a few key notions and applications connected to sparsity, targeting newcomers interested in either the mathematical aspects of this area or its applications.
A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
TLDR
A new fast iterative shrinkage-thresholding algorithm (FISTA) is presented that preserves the computational simplicity of ISTA but has a global rate of convergence proven to be significantly better, both theoretically and practically.
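A minimal sketch of the FISTA iteration for the $\ell_1$-regularized least-squares model problem: an ISTA soft-thresholding step plus Nesterov-style momentum, which yields the $O(1/k^2)$ objective rate. Names and the fixed iteration count are illustrative.

```python
import numpy as np

def fista_lasso(A, b, lam, iters=500):
    """FISTA for min 0.5 * ||Ax - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = y = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(iters):
        g = y - A.T @ (A @ y - b) / L          # gradient step on the smooth part
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + (t - 1.0) / t_new * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x
```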
Just relax: convex programming methods for identifying sparse signals in noise (J. Tropp, IEEE Transactions on Information Theory, 2006)
TLDR
A method called convex relaxation is studied, which attempts to recover the ideal sparse signal by solving a convex program; this can be done in polynomial time with standard scientific software.
Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization (D. Donoho, Michael Elad, Proceedings of the National Academy of Sciences, 2003)
TLDR
This article obtains parallel results in a more general setting, where the dictionary D can arise from two or several bases, frames, or even less structured systems, and sketches three applications: separating linear features from planar ones in 3D data, noncooperative multiuser encoding, and identification of overcomplete independent component models.
…