Rank Minimization or Nuclear-Norm Minimization: Are We Solving the Right Problem?

@article{Dai2014RankMO,
  title={Rank Minimization or Nuclear-Norm Minimization: Are We Solving the Right Problem?},
  author={Yuchao Dai and Hongdong Li},
  journal={2014 International Conference on Digital Image Computing: Techniques and Applications (DICTA)},
  year={2014},
  pages={1-8}
}
  • Published 1 November 2014
Low-rank methods, or rank minimization, have received considerable attention from the computer vision community in recent years. Due to the inherent computational complexity of rank problems, the non-convex rank function is often relaxed to its convex surrogate, the nuclear norm. Thanks to recent progress in the field of compressive sensing (CS), vision researchers who practice CS are well aware of the convex relaxation gap, as well as the conditions under which the relaxation is tight (e.g. Restricted… 
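The gap the abstract alludes to can be made concrete: the rank of a matrix counts its nonzero singular values, while the nuclear norm sums them. A minimal NumPy sketch contrasting the two objectives (illustrative only; this code is not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build an exactly rank-2 5x5 matrix as the sum of two rank-1 outer products.
A = rng.standard_normal((5, 1)) @ rng.standard_normal((1, 5)) \
  + rng.standard_normal((5, 1)) @ rng.standard_normal((1, 5))

rank_A = np.linalg.matrix_rank(A)         # non-convex objective: number of nonzero singular values
nuclear_A = np.linalg.norm(A, ord='nuc')  # convex surrogate: sum of singular values

print(rank_A)  # 2 — the constructed rank
# The nuclear norm equals the sum of the singular values, not their count,
# which is exactly where the relaxation gap comes from.
print(np.isclose(nuclear_A, np.linalg.svd(A, compute_uv=False).sum()))  # True
```

Minimizing the rank would treat all nonzero singular values equally, whereas the nuclear norm penalizes large singular values more; whether the two minimizers coincide depends on conditions such as the restricted isometry property mentioned in the abstract.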


Computational Methods for Computer Vision: Minimal Solvers and Convex Relaxations
TLDR
New convex relaxations for rank-based optimization which avoid drawbacks of previous approaches and provide tighter relaxations are presented.
Global Optimality in Inductive Matrix Completion
TLDR
It is shown that the critical points of the objective function of this problem are either global minima that correspond to the true solution or are “escapable” saddle points, which implies that any minimization algorithm with guaranteed convergence to a local minimum can be used for solving the factorized IMC problem.
Weighted Nuclear Norm and TV Regularization based Image Deraining
Often, images captured by a digital camera in an outdoor vision system may be significantly distorted by bad weather conditions. Such visual distortions may negatively affect the performance of the…
Compact Matrix Factorization with Dependent Subspaces
TLDR
This paper proposes a new factorization model that further constrains the matrix entries and shows qualitatively and quantitatively that regularizing both local and global dynamics yields significantly improved missing data estimation.
Dynamic Behavior Analysis via Structured Rank Minimization
TLDR
A novel structured rank minimization method and its scalable variant are proposed and the generalizability of the proposed framework is demonstrated by conducting experiments on 3 distinct dynamic behavior analysis tasks, whereby the attained results outperform those achieved by other state-of-the-art methods for these tasks.
Extended target detection amid clutter suppression using the combination of the sparsity and total variation
TLDR
Results show that the proposed approach outperforms the current compressed sensing techniques and the fully sampled, nonadaptive matched filter in detecting extended targets in the presence of strong clutter.
Metric Learning via Linear Embeddings for Human Motion Recognition
  • B. Kong
  • Computer Science, Engineering
  • 2020

References

SHOWING 1-10 OF 34 REFERENCES
A Counterexample for the Validity of Using Nuclear Norm as a Convex Surrogate of Rank
TLDR
It is argued that such validity checking cannot be done by numerical computation, and it is shown, by analyzing the noiseless latent low-rank representation (LatLRR) model, that even for very simple rank minimization problems the validity of the nuclear norm surrogate may break down.
Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization
TLDR
It is shown that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space.
Optimal exact least squares rank minimization
TLDR
A nonconvex least squares formulation is proposed that minimizes the least squares loss subject to a rank constraint; it is shown to recover the true rank optimally against any method and to yield sharper parameter estimation than its convex counterpart.
The generalized trace-norm and its application to structure-from-motion problems
TLDR
The generalized trace norm is presented, which allows prior knowledge about a specific problem to be encoded into a convex regularization term that enforces a low-rank solution while taking the problem structure into account, thereby providing problem-dependent regularization.
Limitations of matrix completion via trace norm minimization
TLDR
The issues related to trace norm minimization are analyzed, and an unexpected result is found: trace norm minimization often does not work as well as expected.
New Null Space Results and Recovery Thresholds for Matrix Rank Minimization
TLDR
The resulting thresholds are significantly better, and in particular the weak threshold appears to match simulation results; the curves suggest that for any rank growing linearly with the matrix size n, only three times oversampling is needed for weak recovery.
Robust Recovery of Subspace Structures by Low-Rank Representation
TLDR
It is shown that the convex program associated with LRR solves the subspace clustering problem in the following sense: when the data is clean, LRR exactly recovers the true subspace structures; when the data is contaminated by outliers, it is proved that under certain conditions LRR can exactly recover the row space of the original data.
Robust principal component analysis?
TLDR
It is proved that, under suitable assumptions, it is possible to recover both the low-rank and the sparse components exactly by solving a very convenient convex program called Principal Component Pursuit; this suggests the possibility of a principled approach to robust principal component analysis.
RASL: Robust alignment by sparse and low-rank decomposition for linearly correlated images
TLDR
This paper reduces this extremely challenging optimization problem to a sequence of convex programs that minimize the sum of ℓ1-norm and nuclear norm of the two component matrices, which can be efficiently solved by scalable convex optimization techniques with guaranteed fast convergence.
Robust Low-Rank Subspace Segmentation with Semidefinite Guarantees
TLDR
It is advocated to enforce the symmetric positive semidefinite constraint explicitly during learning (Low-Rank Representation with Positive Semidefinite constraint, or LRR-PSD), and it is shown that it can in fact be solved efficiently with a simple scheme instead of general-purpose SDP solvers, which usually scale poorly.
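Several of the nuclear-norm approaches referenced above rest on one workhorse: the proximal operator of the nuclear norm is soft-thresholding of singular values (singular value thresholding, SVT). A minimal NumPy sketch of that operator (the function name and parameters are mine, not taken from any of the papers listed):

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of tau * ||.||_*.
    Soft-thresholds the singular values of X, shrinking it toward low rank."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)          # soft-threshold each singular value
    return (U * s_shrunk) @ Vt                   # rebuild with shrunken spectrum

rng = np.random.default_rng(1)
L = rng.standard_normal((8, 2)) @ rng.standard_normal((2, 8))  # exactly rank 2
noisy = L + 0.01 * rng.standard_normal((8, 8))                 # full rank after noise

denoised = svt(noisy, tau=0.5)
# The small noise singular values fall below tau and are zeroed out,
# so the result has far lower rank than the noisy input.
print(np.linalg.matrix_rank(noisy), np.linalg.matrix_rank(denoised))
```

This is the inner step of SVT-style algorithms for nuclear-norm regularized matrix completion and of Principal Component Pursuit solvers; the rank-vs-nuclear-norm question raised by the paper is precisely about when this shrinkage recovers the same solution as true rank minimization.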