Non-convex Rank/Sparsity Regularization and Local Minima

@article{Olsson2017NonconvexRR,
  title={Non-convex Rank/Sparsity Regularization and Local Minima},
  author={Carl Olsson and Marcus Carlsson and Fredrik Andersson and Viktor Larsson},
  journal={2017 IEEE International Conference on Computer Vision (ICCV)},
  year={2017},
  pages={332-340}
}
This paper considers the problem of recovering either a low-rank matrix or a sparse vector from observations of linear combinations of the vector or matrix elements. Recent methods replace the non-convex regularization with ℓ1 or nuclear norm relaxations. It is well known that this approach recovers near-optimal solutions if a so-called restricted isometry property (RIP) holds. On the other hand, it also has a shrinking bias which can degrade the solution. In this paper we study an alternative… 
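The shrinking bias is visible directly in the proximal operators involved: the ℓ1 prox (soft thresholding) subtracts the threshold from every surviving entry, while the prox of the cardinality penalty (hard thresholding) leaves surviving entries untouched. A minimal numpy sketch of this contrast (illustrative, not taken from the paper):

```python
import numpy as np

def prox_l1(x, lam):
    # argmin_z 0.5*(z - x)^2 + lam*|z|: shrinks every surviving entry by lam
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def prox_l0(x, lam):
    # argmin_z 0.5*(z - x)^2 + lam*[z != 0]: keeps |x| > sqrt(2*lam) untouched
    return np.where(np.abs(x) > np.sqrt(2.0 * lam), x, 0.0)

x = np.array([3.0, 0.1, -2.0, 0.05])
lam = 0.5
print(prox_l1(x, lam))  # biased:   [ 2.5  0.  -1.5  0. ]
print(prox_l0(x, lam))  # unbiased: [ 3.   0.  -2.   0. ]
```

The same contrast carries over to singular values, where soft thresholding gives the nuclear norm prox and hard thresholding corresponds to a rank penalty.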
A Non-convex Relaxation for Fixed-Rank Approximation
TLDR
It is shown that, despite its non-convexity, the proposed formulation will in many cases have a single stationary point if a RIP holds, and that it typically converges to a better solution than nuclear norm based alternatives even in cases when the RIP does not hold.
Matrix Completion Based on Non-Convex Low-Rank Approximation
TLDR
It is shown that the proposed regularizer and optimization method are also suitable for other rank minimization problems, such as subspace clustering based on low-rank representation, and that they achieve faster convergence than conventional approaches.
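For illustration, a minimal "hard-impute"-style completion sketch along these non-convex lines (an assumed textbook iteration, not the paper's algorithm, with the target rank taken as known): impute the missing entries with the current estimate, then project onto the rank-r matrices, which shrinks nothing.

```python
import numpy as np

def truncate(X, r):
    # best rank-r approximation via the SVD; kept singular values are not shrunk
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def complete(M, mask, r, iters=200):
    X = np.zeros_like(M)
    for _ in range(iters):
        X = truncate(np.where(mask, M, X), r)  # impute, then project to rank r
    return X

rng = np.random.default_rng(5)
M = rng.standard_normal((30, 4)) @ rng.standard_normal((4, 30))  # rank-4 ground truth
mask = rng.random(M.shape) < 0.6                                 # observe 60% of entries
X = complete(M, mask, r=4)
print(np.linalg.norm(X - M) / np.linalg.norm(M))  # relative error, ideally small
```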
An unbiased approach to low rank recovery
TLDR
This paper characterizes the critical points, gives sufficient conditions for a low-rank stationary point to be unique, derives conditions that ensure global optimality of the low-rank stationary point, and shows that these hold under moderate noise levels.
Bias Versus Non-Convexity in Compressed Sensing
Cardinality and rank functions are ideal ways of regularizing under-determined linear systems, but optimization of the resulting formulations is made difficult since both these penalties are non-convex and discontinuous.
Bilinear Parameterization For Differentiable Rank-Regularization
TLDR
This paper shows how many non-differentiable regularization methods can be reformulated into smooth objectives using bilinear parameterization, and shows that the resulting second-order formulation converges to substantially more accurate solutions than competing state-of-the-art methods.
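The bilinear trick rests on the classical variational identity ||X||_* = min_{B Cᵀ = X} (||B||_F² + ||C||_F²)/2, which trades a non-differentiable singular value penalty for a smooth function of the factors. A small numpy check of the identity at the optimal factors (a sketch of this textbook identity, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# optimal factors: B = U*sqrt(S), C = V*sqrt(S), so that B @ C.T == X
B = U * np.sqrt(s)
C = Vt.T * np.sqrt(s)
assert np.allclose(B @ C.T, X)

nuclear = s.sum()                               # ||X||_*
bilinear = 0.5 * (np.sum(B**2) + np.sum(C**2))  # smooth surrogate at the optimum
print(nuclear, bilinear)  # the two values agree
```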
On Convex Envelopes and Regularization of Non-convex Functionals Without Moving Global Minima
TLDR
For optimization problems where the ℓ2-term contains a singular matrix, it is proved that the regularizations never move the global minima.
Differentiable Fixed-Rank Regularisation using Bilinear Parameterisation
TLDR
It is shown how optimality guarantees can be lifted to methods that employ bilinear parameterisation when the sought rank is known, and the approach is compared to state-of-the-art solvers for prior-free non-rigid structure from motion.
Bilinear Parameterization for Non-Separable Singular Value Penalties
TLDR
This work replaces the non-convex penalties with a surrogate that converts the original objectives into differentiable equivalents, enabling second-order methods, in particular the variable projection method (VarPro), and thereby benefiting from faster convergence.
Relaxations for Non-Separable Cardinality/Rank Penalties
TLDR
This paper presents a class of non-separable penalties, gives a recipe for computing strong relaxations suitable for optimization, and shows how a stationary point can be guaranteed to be unique under the restricted isometry property (RIP) assumption.
Bias Reduction in Compressed Sensing
TLDR
This paper combines recently developed bias-free non-convex alternatives with the nuclear and ℓ1 penalties, develops an efficient minimization scheme using derived proximal operators, and evaluates the method on several real and synthetic computer vision applications with promising results.
...

References

Showing 1-10 of 35 references
Convex Low Rank Approximation
TLDR
This paper proposes a convex formulation that is more flexible in that it can be combined with any other convex constraints and penalty functions, and shows that for a general class of problems the convex envelope can be efficiently computed and may in some cases even have a closed-form expression.
Simultaneously Structured Models With Application to Sparse and Low-Rank Matrices
TLDR
This framework applies to arbitrary structure-inducing norms as well as to a wide range of measurement ensembles, and allows us to give sample complexity bounds for problems such as sparse phase retrieval and low-rank tensor completion.
A General Iterative Shrinkage and Thresholding Algorithm for Non-convex Regularized Optimization Problems
TLDR
A General Iterative Shrinkage and Thresholding (GIST) algorithm is proposed to solve the non-convex optimization problem for a large class of non-convex penalties, together with a detailed convergence analysis of the GIST algorithm.
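For concreteness, a minimal proximal-gradient loop in this iterative shrinkage/thresholding spirit, under simplifying assumptions (fixed step size 1/L and no line search, which GIST would add), applied to the cardinality penalty whose prox is hard thresholding:

```python
import numpy as np

def hard_threshold(x, t):
    # prox of t*||.||_0: zero out entries whose quadratic cost 0.5*x^2 <= t
    return np.where(np.abs(x) > np.sqrt(2.0 * t), x, 0.0)

def prox_grad_l0(A, b, lam, iters=500):
    # minimize 0.5*||Ax - b||^2 + lam*||x||_0 by gradient step + prox step
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = hard_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [2.0, -3.0, 1.5]
x_hat = prox_grad_l0(A, A @ x_true, lam=0.5)
print(np.nonzero(x_hat)[0])  # recovered support, ideally {3, 17, 42}
```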
Generalized Nonconvex Nonsmooth Low-Rank Minimization
TLDR
It is proved that the proposed IRNN algorithm decreases the objective function value monotonically and that any limit point is a stationary point; experiments show enhanced low-rank matrix recovery compared with state-of-the-art convex algorithms.
Iterative reweighted least squares for matrix rank minimization (Karthika Mohan and M. Fazel, 48th Annual Allerton Conference on Communication, Control, and Computing, 2010)
TLDR
This paper extends IRLS-p to a family of algorithms for the matrix rank minimization problem and presents a related family of algorithms, sIRLS-p, which performs better than algorithms such as Singular Value Thresholding on a range of 'hard' problems (where the ratio of the number of degrees of freedom in the variable to the number of measurements is large).
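The vector analogue of the IRLS-p idea (taking p = 0), which the paper lifts to singular values for matrix rank minimization, fits in a few lines; the smoothing parameter eps and its annealing schedule below are illustrative assumptions:

```python
import numpy as np

def irls_sparse(A, b, iters=50, eps=1.0):
    # repeatedly solve min sum_i w_i*x_i^2 s.t. Ax = b with w_i = 1/(x_i^2 + eps),
    # a smooth reweighted surrogate for the cardinality of x
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # minimum-norm starting point
    for _ in range(iters):
        D = np.diag(x**2 + eps)                # inverse of the weight matrix
        x = D @ A.T @ np.linalg.solve(A @ D @ A.T, b)
        eps = max(eps / 10.0, 1e-8)            # anneal the smoothing
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[5, 12, 33]] = [1.0, -2.0, 0.5]
x_hat = irls_sparse(A, A @ x_true)
print(np.round(x_hat[[5, 12, 33]], 3))  # ideally close to [1.0, -2.0, 0.5]
```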
Enhancing Sparsity by Reweighted ℓ1 Minimization
TLDR
A novel method for sparse signal recovery is presented that in many situations outperforms ℓ1 minimization, in the sense that substantially fewer measurements are needed for exact recovery.
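A minimal sketch of the reweighting loop, using scipy's generic LP solver for the inner weighted ℓ1 step (the solver choice and the parameters iters and eps are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_min(A, b, w):
    # min sum_i w_i*(u_i + v_i) s.t. A(u - v) = b, u, v >= 0, where x = u - v
    n = A.shape[1]
    res = linprog(np.concatenate([w, w]), A_eq=np.hstack([A, -A]), b_eq=b)
    return res.x[:n] - res.x[n:]

def reweighted_l1(A, b, iters=5, eps=0.1):
    w = np.ones(A.shape[1])
    for _ in range(iters):
        x = weighted_l1_min(A, b, w)
        w = 1.0 / (np.abs(x) + eps)  # penalize small entries more, large ones less
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((15, 40))
x_true = np.zeros(40)
x_true[[2, 9, 27]] = [1.0, -1.5, 2.0]
x_hat = reweighted_l1(A, A @ x_true)
print(np.round(x_hat[[2, 9, 27]], 3))  # ideally close to [1.0, -1.5, 2.0]
```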
Nonconvex Relaxation Approaches to Robust Matrix Recovery
TLDR
A non-convex optimization model is proposed for handling the low-rank matrix recovery problem, along with an efficient strategy to speed up MM-ALM, which makes the running time comparable with the state-of-the-art algorithm for solving RPCA.
A simplified approach to recovery conditions for low rank matrices
TLDR
This paper shows how several classes of recovery conditions can be extended from vectors to matrices in a simple and transparent way, leading to the best known restricted isometry and nullspace conditions for matrix recovery.
Robust principal component analysis?
TLDR
It is proved that, under suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit, which among all feasible decompositions simply minimizes a weighted combination of the nuclear norm and the ℓ1 norm; this suggests the possibility of a principled approach to robust principal component analysis.
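A compact ADMM-style sketch of Principal Component Pursuit, alternating singular value thresholding for the low-rank part with soft thresholding for the sparse part (the fixed penalty mu and iteration count are simplifying assumptions):

```python
import numpy as np

def shrink(X, t):
    # soft thresholding: prox of t*||.||_1
    return np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

def svt(X, t):
    # singular value thresholding: prox of t*||.||_*
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def pcp(M, lam=None, mu=1.0, iters=200):
    # min ||L||_* + lam*||S||_1  s.t.  L + S = M
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))  # standard choice from the PCP analysis
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        Y += mu * (M - L - S)              # dual ascent on the constraint
    return L, S

rng = np.random.default_rng(4)
L0 = rng.standard_normal((40, 5)) @ rng.standard_normal((5, 40))  # rank-5 part
S0 = np.zeros((40, 40))
idx = rng.random((40, 40)) < 0.05
S0[idx] = 10.0 * rng.standard_normal(idx.sum())                   # sparse outliers
L, S = pcp(L0 + S0)
print(np.linalg.norm(L - L0) / np.linalg.norm(L0))  # relative error, ideally small
```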
On convexification/optimization of functionals including an ℓ2-misfit term
TLDR
For functionals where the ℓ2-misfit includes a singular matrix and where the convex envelope usually is not explicitly computable, theory is provided for how minimizers of (explicitly computable) approximations of the convex envelope relate to minimizers of the original functional.
...