Non-convex Rank/Sparsity Regularization and Local Minima

Carl Olsson, Marcus Carlsson, Fredrik Andersson, Viktor Larsson
2017 IEEE International Conference on Computer Vision (ICCV)
This paper considers the problem of recovering either a low-rank matrix or a sparse vector from observations of linear combinations of the matrix or vector elements. Recent methods replace the non-convex regularization with ℓ1 or nuclear-norm relaxations. It is well known that this approach recovers near-optimal solutions if a so-called restricted isometry property (RIP) holds; on the other hand, it also has a shrinking bias which can degrade the solution. In this paper we study an alternative…
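The shrinking bias mentioned above can be seen directly from the proximal operators of the two penalties: the ℓ1 prox shrinks every surviving coefficient, while the (non-convex) ℓ0 prox leaves large coefficients untouched. A minimal numpy sketch, for illustration only (this is not the paper's proposed regularizer):

```python
import numpy as np

# Proximal operator of lam * ||x||_1: soft thresholding. Every entry
# that survives is shrunk toward zero by lam -- the "shrinking bias"
# of the convex relaxation.
def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Proximal operator of lam * ||x||_0: hard thresholding. Entries above
# the threshold sqrt(2 * lam) pass through unchanged, so large
# coefficients carry no bias.
def hard_threshold(x, lam):
    return np.where(np.abs(x) > np.sqrt(2 * lam), x, 0.0)

x = np.array([5.0, 0.1, -3.0])
print(soft_threshold(x, 1.0))  # [ 4.  0. -2.]  large entries shrunk by 1
print(hard_threshold(x, 1.0))  # [ 5.  0. -3.]  large entries kept exactly
```

Both operators kill the small entry, but only the soft threshold biases the large ones, which is the degradation the paper's alternative regularizer aims to avoid.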
A Non-convex Relaxation for Fixed-Rank Approximation
It is shown that, despite its non-convexity, the proposed formulation will in many cases have a single stationary point if a RIP holds, and that it typically converges to a better solution than nuclear-norm-based alternatives even in cases when the RIP does not hold.
Matrix Completion Based on Non-Convex Low-Rank Approximation
It is shown that the proposed regularizer, as well as the optimization method, is suitable for other rank-minimization problems, such as subspace clustering based on low-rank representation, and can achieve faster convergence than conventional approaches.
An un-biased approach to low rank recovery.
This paper characterizes the critical points, gives sufficient conditions for a low-rank stationary point to be unique, derives conditions that ensure global optimality of the low-rank stationary point, and shows that these hold under moderate noise levels.
Bias Versus Non-Convexity in Compressed Sensing
Cardinality and rank functions are ideal ways of regularizing under-determined linear systems, but optimization of the resulting formulations is made difficult since both these penalties are…
Bilinear Parameterization For Differentiable Rank-Regularization
It is shown how many non-differentiable regularization methods can be reformulated into smooth objectives using bilinear parameterization, and that the resulting second-order formulation converges to substantially more accurate solutions than competing state-of-the-art methods.
On Convex Envelopes and Regularization of Non-convex Functionals Without Moving Global Minima
For optimization problems where the ℓ2-term contains a singular matrix, it is proved that the regularizations never move the global minima.
Differentiable Fixed-Rank Regularisation using Bilinear Parameterisation
It is shown how optimality guarantees can be lifted to methods that employ bilinear parameterisation when the sought rank is known, and compared to state-of-the-art solvers for prior-free non-rigid structure from motion.
Bilinear Parameterization for Non-Separable Singular Value Penalties
This work proposes a second-order approach, in particular the variable projection method (VarPro), that replaces the non-convex penalties with a surrogate capable of converting the original objectives into differentiable equivalents, thereby benefiting from faster convergence.
Relaxations for Non-Separable Cardinality/Rank Penalties
This paper presents a class of non-separable penalties and gives a recipe for computing strong relaxations suitable for optimization and shows how a stationary point can be guaranteed to be unique under the restricted isometry property (RIP) assumption.
Bias Reduction in Compressed Sensing
This paper combines recently developed bias-free non-convex alternatives with the nuclear and ℓ1 penalties, develops an efficient minimization scheme using derived proximal operators, and evaluates the method on several real and synthetic computer vision applications with promising results.


Convex Low Rank Approximation
This paper proposes a convex formulation that is more flexible in that it can be combined with any other convex constraints and penalty functions and shows that for a general class of problems the envelope can be efficiently computed and may in some cases even have a closed form expression.
Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization
It is shown that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space.
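The matrix analogue of the ℓ1 case appears throughout these references: the proximal operator of the nuclear norm soft-thresholds the singular values, which is the core step of singular-value-thresholding-style solvers. A minimal sketch (the function name `svt` and the example are our own, not from any of the cited papers):

```python
import numpy as np

# Proximal operator of tau * ||X||_* (nuclear norm): compute the SVD
# and soft-threshold the singular values.
def svt(Y, tau):
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# A rank-1 matrix with singular value 2: thresholding with tau = 0.5
# keeps the rank-1 structure but shrinks the singular value to 1.5 --
# the same shrinking bias as in the vector / l1 case.
M = np.outer([1.0, 0.0, 0.0], [2.0, 0.0])
X = svt(M, 0.5)
print(X)  # [[1.5 0.] [0. 0.] [0. 0.]]
```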
Exact matrix completion via convex optimization
It is proved that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries, and that objects other than signals and images can be perfectly reconstructed from very limited information.
Simultaneously Structured Models With Application to Sparse and Low-Rank Matrices
This framework applies to arbitrary structure-inducing norms as well as to a wide range of measurement ensembles, and allows us to give sample complexity bounds for problems such as sparse phase retrieval and low-rank tensor completion.
A General Iterative Shrinkage and Thresholding Algorithm for Non-convex Regularized Optimization Problems
A General Iterative Shrinkage and Thresholding (GIST) algorithm is proposed to solve the non-convex optimization problem for a large class of non-convex penalties, and a detailed convergence analysis of the GIST algorithm is presented.
Generalized Nonconvex Nonsmooth Low-Rank Minimization
In theory, it is proved that IRNN decreases the objective function value monotonically and that any limit point is a stationary point, which enhances low-rank matrix recovery compared with state-of-the-art convex algorithms.
Iterative reweighted least squares for matrix rank minimization
  • Karthika Mohan, M. Fazel
  • 2010 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton), 2010
This paper extends IRLS-p as a family of algorithms for the matrix rank minimization problem and presents a related family of algorithms, sIRLS-p, which performs better than algorithms such as Singular Value Thresholding on a range of 'hard' problems (where the ratio of the number of degrees of freedom in the variable to the number of measurements is large).
Enhancing Sparsity by Reweighted ℓ1 Minimization
A novel method for sparse signal recovery is presented that in many situations outperforms ℓ1 minimization, in the sense that substantially fewer measurements are needed for exact recovery.
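The reweighting idea can be sketched in a few lines: after each ℓ1 solve, each coordinate is reweighted by 1/(|x_i| + eps), so large entries are penalized less on the next round. In the sketch below the inner weighted problem is solved by ISTA on a weighted lasso, which is our substitution (the original work solves each weighted problem by exact convex programming); all names and parameter values are illustrative:

```python
import numpy as np

# Inner solver: ISTA on 0.5 * ||Ax - b||^2 + lam * sum_i w_i |x_i|.
def ista_weighted_l1(A, b, lam, w, step, iters=500):
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - b))               # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam * w, 0.0)
    return x

# Outer loop: alternate the weighted l1 solve with the weight update
# w_i = 1 / (|x_i| + eps), pushing the penalty toward the l0 count.
def reweighted_l1(A, b, lam=0.01, rounds=4, eps=0.1):
    step = 1.0 / np.linalg.norm(A, 2) ** 2               # 1 / Lipschitz const.
    w = np.ones(A.shape[1])
    for _ in range(rounds):
        x = ista_weighted_l1(A, b, lam, w, step)
        w = 1.0 / (np.abs(x) + eps)                      # reweighting step
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [2.0, -1.5, 3.0]
b = A @ x_true
x_hat = reweighted_l1(A, b)
```

The small `eps` keeps the weights finite at zero entries; large coefficients get weights near zero and so are nearly unpenalized, which is exactly the bias reduction the reweighting scheme provides over plain ℓ1.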
Nonconvex Relaxation Approaches to Robust Matrix Recovery
A non-convex optimization model for handling the low-rank matrix recovery problem, together with an efficient strategy to speed up MM-ALM, which makes the running time comparable with the state-of-the-art algorithm for solving RPCA.
A simplified approach to recovery conditions for low rank matrices
This paper shows how several classes of recovery conditions can be extended from vectors to matrices in a simple and transparent way, leading to the best known restricted isometry and nullspace conditions for matrix recovery.