• Corpus ID: 1939560

Nonconvex Relaxation Approaches to Robust Matrix Recovery

Shusen Wang, Dehua Liu, Zhihua Zhang
Motivated by the recent developments of nonconvex penalties in sparsity modeling, we propose a nonconvex optimization model for handling the low-rank matrix recovery problem. Different from the famous robust principal component analysis (RPCA), we suggest recovering low-rank and sparse matrices via a nonconvex loss function and a nonconvex penalty. The advantage of the nonconvex approach lies in its stronger robustness. To solve the model, we devise a majorization-minimization augmented Lagrange… 
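For context, the convex RPCA baseline that the abstract contrasts with decomposes an observed matrix $M$ by solving

```latex
\min_{L,\,S}\; \|L\|_{*} + \lambda \|S\|_{1}
\quad \text{subject to} \quad M = L + S,
```

where $\|L\|_{*}$ is the nuclear norm (sum of singular values) promoting low rank and $\|S\|_{1}$ is the entrywise $\ell_1$ norm promoting sparsity. Per the abstract, the proposed model replaces these convex terms with a nonconvex loss and a nonconvex penalty for stronger robustness.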


An Alternating Direction Method with Continuation for Nonconvex Low Rank Minimization
Comprehensive numerical experiments show that the proposed nonconvex model and the ADM algorithm are competitive with the state-of-the-art models and algorithms in terms of efficiency and accuracy.
A Generalized Robust Minimization Framework for Low-Rank Matrix Recovery
The problem of recovering low-rank matrices that are heavily corrupted by outliers or large errors is solved by formulating it as a generalized nonsmooth, nonconvex minimization functional that exploits the Schatten p-norm and seminorm.
Fast Low-Rank Matrix Learning with Nonconvex Regularization
This paper shows that for many commonly used nonconvex low-rank regularizers, a cutoff can be derived to automatically threshold the singular values obtained from the proximal operator, which allows the power method to approximate the SVD efficiently.
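A minimal sketch of this cutoff idea, using the MCP regularizer's proximal map (firm thresholding) on singular values as an illustrative example; the function names are ours, and the full SVD below stands in for the truncated power-method SVD the paper advocates:

```python
import numpy as np

def mcp_prox_singular_values(sigma, lam, gamma=2.0):
    # Proximal map of the MCP penalty (firm thresholding), applied
    # entrywise to singular values. Requires gamma > 1. Any value at or
    # below the cutoff `lam` maps exactly to zero -- this is the cutoff
    # that lets a truncated SVD suffice.
    return np.where(sigma <= lam, 0.0,
           np.where(sigma <= gamma * lam,
                    gamma * (sigma - lam) / (gamma - 1.0),
                    sigma))

def nonconvex_svt(X, lam, gamma=2.0):
    # Full SVD shown for clarity; since singular values below the cutoff
    # are zeroed anyway, only the leading ones need to be computed.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(mcp_prox_singular_values(s, lam, gamma)) @ Vt
```

Note that, unlike soft thresholding, singular values above `gamma * lam` pass through unchanged, which is the source of MCP's reduced bias.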
Efficient Recovery of Low-Rank Matrix via Double Nonconvex Nonsmooth Rank Minimization
A general and flexible rank relaxation function named weighted NNR relaxation function, which is actually derived from the initial double NNR (DNNR) relaxations, i.e., DNNR relaxation function acts on the nonconvex singular values function (SVF).
Nonconvex plus quadratic penalized low-rank and sparse decomposition for noisy image alignment
This paper exploits the local linear approximation (LLA) method to turn the resulting nonconvex penalization problem into a series of weighted convex penalization problems; these subproblems are efficiently solved via the augmented Lagrange multiplier (ALM) method.
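The LLA step can be sketched as follows: linearize the nonconvex penalty at the current iterate, which yields a weighted-ℓ1 subproblem. The sketch below is ours, uses the MCP penalty's derivative for the weights, and simplifies to the orthogonal-design case (where the weighted-ℓ1 subproblem has a closed-form soft-thresholding solution); the paper solves its subproblems via ALM instead:

```python
import numpy as np

def mcp_derivative(t, lam, gamma=2.0):
    # MCP derivative p'(t) = max(0, lam - t/gamma) for t >= 0.
    # Large coefficients get zero weight, hence (nearly) no shrinkage.
    return np.maximum(0.0, lam - t / gamma)

def lla_step(beta, y, lam, gamma=2.0):
    # One LLA iteration for the toy model y = beta + noise:
    # 1) build weights from the penalty derivative at the current beta;
    # 2) solve the resulting weighted-l1 subproblem, which here is
    #    coordinatewise soft thresholding with per-entry threshold w.
    w = mcp_derivative(np.abs(beta), lam, gamma)
    return np.sign(y) * np.maximum(np.abs(y) - w, 0.0)
```

Iterating `lla_step` from a convex (lasso) initializer is the standard LLA recipe: each pass re-weights so that already-large coefficients are shrunk less.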
A Nearly Unbiased Matrix Completion Approach
This work derives a shrinkage operator that is nearly unbiased in comparison with the well-known soft shrinkage operator, and devises two algorithms, non-convex soft imputation (NCSI) and the non-convex alternating direction method of multipliers (NCADMM), to carry out the numerical estimation.
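To see the bias this line of work targets (the paper's own operator is not reproduced here; this is a generic illustration): soft shrinkage shifts every surviving entry toward zero by the full threshold, whereas an unbiased operator such as hard shrinkage leaves large entries untouched:

```python
import numpy as np

def soft_shrink(x, lam):
    # Soft shrinkage: biased -- every survivor is shifted toward zero by lam.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hard_shrink(x, lam):
    # Hard shrinkage: unbiased on survivors -- large entries kept exactly.
    return np.where(np.abs(x) > lam, x, 0.0)
```

For example, with threshold 1 a true value of 5 becomes 4 under soft shrinkage (bias 1) but stays 5 under hard shrinkage; "nearly unbiased" operators interpolate between the two.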
Non-convex Penalty for Tensor Completion and Robust PCA
Non-convex penalties are employed in tensor recovery problems such as tensor completion and tensor robust principal component analysis, which have real applications such as image inpainting and denoising.
Improved sparse low-rank matrix estimation
Non-convex Rank/Sparsity Regularization and Local Minima
The main theoretical results show that if a RIP condition holds, then the stationary points are often well separated, in the sense that their differences must have high cardinality/rank, and the approach is likely to converge to a better solution than the standard ℓ1/nuclear-norm relaxation even when starting from trivial initializations.


Analysis of Multi-stage Convex Relaxation for Sparse Regularization
  Tong Zhang, J. Mach. Learn. Res., 2010
A multi-stage convex relaxation scheme for solving problems with non-convex objective functions arising in sparse regularization is presented, and it is shown that the local solution obtained by this procedure is superior to the global solution of the standard L1 convex relaxation for learning sparse targets.
A non-convex relaxation approach to sparse dictionary learning
This work treats the so-called minimax concave penalty as a nonconvex relaxation of the ℓ0 penalty to achieve sparsity, and employs an online algorithm to adaptively learn the dictionary, making the non-convex formulation computationally feasible.
Efficient Sparse Group Feature Selection via Nonconvex Optimization
A nonconvex sparse group feature selection model is introduced and an efficient optimization algorithm is presented, of which the key step is a projection with two coupled constraints, so that consistent feature selection and parameter estimation can be achieved.
SpaRCS: Recovering low-rank and sparse matrices from compressive measurements
This work proposes a natural optimization problem for signal recovery under this model and develops a new greedy algorithm called SpaRCS to solve it, which inherits a number of desirable properties from the state-of-the-art CoSaMP and ADMiRA algorithms.
The potential of coordinate descent algorithms for fitting such models is demonstrated: theoretical convergence properties are established, the algorithms are shown to be significantly faster than competing approaches, and the numerical results suggest that MCP is the preferred approach among the three methods.
MATRIX ALPS: Accelerated low rank and sparse matrix reconstruction
This work theoretically characterizes the convergence properties of MATRIX ALPS and numerically illustrates that the algorithm outperforms the existing convex as well as non-convex state-of-the-art algorithms in computational efficiency without sacrificing stability.
Nonconvex Penalization Using Laplace Exponents and Concave Conjugates
It is shown that the nonconvex logarithmic and exponential penalty functions are the Laplace exponents of Gamma and compound Poisson subordinators, respectively, and that the relationship between these two penalties is due to the asymmetry of the KL divergence.
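For reference, the standard forms of these Laplace exponents (textbook expressions, not quoted from the paper; here $a, b, c, \gamma > 0$ are the subordinator parameters) are

```latex
\phi_{\mathrm{Gamma}}(u) = a \log\!\left(1 + \frac{u}{b}\right),
\qquad
\phi_{\mathrm{CP}}(u) = c\left(1 - e^{-\gamma u}\right),
```

where the compound Poisson case uses deterministic jump size $\gamma$; evaluated at $u = |t|$, these match the logarithmic and exponential penalty shapes, respectively.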
Group coordinate descent algorithms for nonconvex penalized regression
A Singular Value Thresholding Algorithm for Matrix Completion
This paper develops a simple first-order and easy-to-implement algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank, and develops a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms.
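The shrinkage step at the heart of this algorithm is the proximal operator of the nuclear norm, i.e., soft-thresholding of singular values; a minimal numpy sketch of that operator (the full algorithm additionally iterates over the observed entries):

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: soft-threshold the singular values
    # of X by tau. This is the proximal operator of tau * ||.||_* and
    # returns a (typically) lower-rank matrix.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```

Because the optimal solution has low rank, most singular values fall below `tau`, which is why only a few leading singular vectors ever need to be computed.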