The Power of Convex Relaxation: Near-Optimal Matrix Completion

@article{Candes2010ThePO,
  title={The Power of Convex Relaxation: Near-Optimal Matrix Completion},
  author={Emmanuel J. Cand{\`e}s and Terence Tao},
  journal={IEEE Transactions on Information Theory},
  year={2010},
  volume={56},
  pages={2053--2080}
}
  • E. Candès, T. Tao
  • Published 9 March 2009
  • Mathematics, Computer Science
  • IEEE Transactions on Information Theory
This paper is concerned with the problem of recovering an unknown matrix from a small fraction of its entries. This is known as the matrix completion problem, and comes up in a great number of applications, including the famous Netflix Prize and other similar questions in collaborative filtering. In general, accurate recovery of a matrix from a small number of entries is impossible, but the knowledge that the unknown matrix has low rank radically changes this premise, making the search for… 
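The relaxation the paper studies replaces the nonconvex rank objective with its convex surrogate, the nuclear norm. As a concrete illustration, here is a minimal sketch of that program using cvxpy; the solver front end, the function name complete_nuclear_norm, and the toy data are illustrative assumptions, not anything the paper prescribes.

```python
import numpy as np
import cvxpy as cp

def complete_nuclear_norm(M_obs, mask):
    """Nuclear-norm relaxation of matrix completion (hypothetical helper).

    M_obs: observed matrix, with zeros at unobserved positions.
    mask:  0/1 array, 1 where an entry was revealed.
    """
    X = cp.Variable(M_obs.shape)
    # Minimize the nuclear norm (sum of singular values), the convex
    # surrogate for rank, subject to agreeing with the observed entries.
    constraints = [cp.multiply(mask, X) == cp.multiply(mask, M_obs)]
    cp.Problem(cp.Minimize(cp.normNuc(X)), constraints).solve()
    return X.value

# Toy usage: a rank-1 matrix with roughly half of its entries revealed.
rng = np.random.default_rng(0)
M = np.outer(rng.standard_normal(8), rng.standard_normal(8))
mask = (rng.random(M.shape) < 0.5).astype(float)
M_hat = complete_nuclear_norm(M * mask, mask)
```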
Accurate low-rank matrix recovery from a small number of linear measurements
  • E. Candès, Y. Plan
  • Mathematics, Computer Science
    2009 47th Annual Allerton Conference on Communication, Control, and Computing (Allerton)
  • 2009
TLDR
It is shown that for a certain class of random linear measurements, nuclear-norm minimization provides stable recovery from a number of samples nearly at the theoretical lower limit, and enjoys order-optimal error bounds (with high probability).
Fast Exact Matrix Completion with Finite Samples
TLDR
This paper presents a fast iterative algorithm that solves the matrix completion problem by observing $O(nr^5 \log^3 n)$ entries, a sample complexity independent of the condition number and the desired accuracy; it is the first near-linear-time algorithm for exact matrix completion with finite sample complexity.
Computational Limits for Matrix Completion
TLDR
This paper proves that Matrix Completion remains computationally intractable even if the unknown matrix has rank $4$ but the algorithm is allowed to output any constant rank matrix, and gives the first complexity-theoretic justification for why distributional assumptions are needed beyond the incoherence assumption in order to obtain positive results.
Matrix Completion From any Given Set of Observations
TLDR
A new way to interpret the output of this algorithm is given: a probability distribution over the non-revealed entries is found with respect to which a bound on the generalization error can be proven.
Noisy Matrix Completion: Understanding Statistical Guarantees for Convex Relaxation via Nonconvex Optimization
TLDR
It is demonstrated that when the rank and the condition number of the unknown matrix are bounded by a constant, the convex programming approach achieves near-optimal estimation errors - in terms of the Euclidean loss, the entrywise loss, and the spectral norm loss - for a wide range of noise levels.
Noisy Matrix Completion : Understanding the Stability of Convex Relaxation via Nonconvex Optimization
This paper studies noisy low-rank matrix completion: given partial and corrupted entries of a large low-rank matrix, the goal is to estimate the underlying matrix faithfully and efficiently. Arguably…
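In this line of work, the convex estimator is typically nuclear-norm-penalized least squares; the following cvxpy sketch is my paraphrase of that standard formulation (the name complete_noisy and the value of the regularization parameter lam are illustrative):

```python
import cvxpy as cp

def complete_noisy(Y_obs, mask, lam=1.0):
    """Nuclear-norm-regularized least squares for noisy completion.

    Minimizes 0.5 * ||P_Omega(X - Y)||_F^2 + lam * ||X||_*,
    where P_Omega keeps only the observed entries.
    """
    X = cp.Variable(Y_obs.shape)
    data_fit = cp.sum_squares(cp.multiply(mask, X - Y_obs))
    cp.Problem(cp.Minimize(0.5 * data_fit + lam * cp.normNuc(X))).solve()
    return X.value
```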
Matrix Completion With Column Manipulation: Near-Optimal Sample-Robustness-Rank Tradeoffs
TLDR
An efficient algorithm is developed based on a combination of a trimming procedure and a convex program minimizing the nuclear norm and the ℓ1,2 norm; it is shown that, given a vanishing fraction of observed entries, it is nevertheless possible to complete the underlying matrix even when the number of corrupted columns grows.
Fast and Near-Optimal Matrix Completion via Randomized Basis Pursuit
TLDR
A randomized basis pursuit algorithm is proposed that provides an exact reconstruction of the input matrix in strongly polynomial time when the matrix entries are rational, and that can reconstruct a larger class of matrices while inspecting a significantly smaller number of entries.
A Geometric Approach to Low-Rank Matrix Completion
TLDR
This work considers an optimization procedure that searches for a column (or row) space geometrically consistent with the partial observations; for special completion scenarios that preclude the existence of local minimizers, strong performance guarantees are established that do not require matrix incoherence and hold with probability one for arbitrary matrix sizes.
Tight Oracle Inequalities for Low-Rank Matrix Recovery From a Minimal Number of Noisy Random Measurements
  • E. Candès, Y. Plan
  • Mathematics, Computer Science
    IEEE Transactions on Information Theory
  • 2011
TLDR
It is shown that properly constrained nuclear-norm minimization stably recovers a low-rank matrix from a constant number of noisy measurements per degree of freedom; this seems to be the first result of this nature.
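In the standard formulation of this setting (my paraphrase, not a quote from the paper), the estimator is constrained nuclear-norm minimization: $\hat{X} = \arg\min_X \|X\|_*$ subject to $\|\mathcal{A}(X) - y\|_2 \le \epsilon$, where $\mathcal{A}$ is the measurement map, $y$ the noisy data, and $\epsilon$ an upper bound on the noise level.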

References

Showing 1-10 of 33 references
A Singular Value Thresholding Algorithm for Matrix Completion
TLDR
This paper develops a simple, easy-to-implement first-order algorithm that is extremely efficient at addressing problems in which the optimal solution has low rank, and a framework in which one can understand these algorithms in terms of well-known Lagrange multiplier algorithms.
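The SVT iteration alternates soft-thresholding of singular values with a step on the observed residual; here is a compact NumPy sketch of that scheme (the parameter values and the name svt_complete are illustrative choices, not the paper's):

```python
import numpy as np

def svt_complete(M_obs, mask, tau=5.0, step=1.2, iters=500, tol=1e-4):
    """Singular value thresholding for matrix completion (sketch).

    Alternates X_k = shrink(Y_{k-1}, tau) with a gradient-like step
    Y_k = Y_{k-1} + step * P_Omega(M - X_k) on the observed entries.
    """
    Y = np.zeros_like(M_obs)
    norm_obs = np.linalg.norm(mask * M_obs)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt   # soft-threshold spectrum
        residual = mask * (M_obs - X)
        Y += step * residual
        if np.linalg.norm(residual) <= tol * norm_obs:
            break
    return X
```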
Uniqueness of Low-Rank Matrix Completion by Rigidity Theory
TLDR
It is observed that basic ideas and tools of rigidity theory can be adapted to determine uniqueness of low-rank matrix completion, where inner products play the role that distances play in rigidity theory.
Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization
TLDR
It is shown that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space.
Exact matrix completion via convex optimization
TLDR
It is demonstrated that in very general settings, one can perfectly recover all of the missing entries from most sufficiently large subsets by solving a convex programming problem that finds the matrix with the minimum nuclear norm agreeing with the observed entries.
Fixed point and Bregman iterative methods for matrix rank minimization
TLDR
A very fast, robust, and powerful algorithm, which the authors call FPCA (Fixed Point Continuation with Approximate SVD), is proposed that can solve very large matrix rank minimization problems; convergence of the first of these algorithms is proved.
Matrix Completion From a Few Entries
TLDR
An efficient algorithm called OptSpace is described that reconstructs M from $|E| = O(rn)$ observed entries with relative root mean square error $\mathrm{RMSE} \le C(\alpha)\,(nr/|E|)^{1/2}$, with probability larger than $1 - 1/n^3$.
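OptSpace begins with a truncated SVD of the rescaled observation matrix before its trimming and manifold-refinement stages; the sketch below covers only that spectral step and omits the trimming of over-represented rows and columns (the function name and the rescaling convention are mine):

```python
import numpy as np

def spectral_initialization(M_obs, mask, r):
    """Rank-r truncated SVD of the rescaled observed matrix.

    Dividing by the sampling rate p makes (mask * M) / p an unbiased
    entrywise estimate of the full matrix, so its top-r SVD is a
    reasonable starting point for further refinement.
    """
    p = mask.mean()                         # empirical fraction observed
    U, s, Vt = np.linalg.svd(M_obs / p, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]      # best rank-r approximation
```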
Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices
TLDR
A heuristic for minimizing the rank of a positive semidefinite matrix over a convex set using the logarithm of the determinant as a smooth approximation for rank is presented and readily extended to handle general matrices.
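The heuristic replaces rank with the smooth surrogate $\log\det(X + \delta I)$ and minimizes it by repeated linearization, each step being a trace-minimization SDP; a sketch for the positive semidefinite case follows (cvxpy, the constraint encoding via A_list and b, and the parameter values are all my assumptions):

```python
import numpy as np
import cvxpy as cp

def logdet_rank_heuristic(A_list, b, n, delta=1e-3, iters=5):
    """Iterative log-det heuristic for rank minimization over a PSD set.

    Each round linearizes log det(X + delta*I) at the current iterate
    X_k, whose gradient is W_k = (X_k + delta*I)^{-1}, and solves the
    SDP: minimize trace(W_k @ X) subject to <A_i, X> = b_i, X PSD.
    A_list and b are hypothetical placeholders for affine constraints.
    """
    X_k = np.eye(n)
    for _ in range(iters):
        W = np.linalg.inv(X_k + delta * np.eye(n))
        X = cp.Variable((n, n), PSD=True)
        constraints = [cp.trace(A @ X) == bi for A, bi in zip(A_list, b)]
        cp.Problem(cp.Minimize(cp.trace(W @ X)), constraints).solve()
        X_k = X.value
    return X_k
```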
On the rank minimization problem over a positive semidefinite linear matrix inequality
TLDR
This paper considers the problem of minimizing the rank of a positive semidefinite matrix, subject to the constraint that an affine transformation of it is also positive semidefinite, and employs ideas from the ordered linear complementarity theory and the notion of the least element in a vector lattice.
Recovering the missing components in a large noisy low-rank matrix: application to SFM
  • Pei Chen, D. Suter
  • Mathematics, Medicine
    IEEE Transactions on Pattern Analysis and Machine Intelligence
  • 2004
TLDR
This paper provides a method to recover the most reliable imputation by deciding when the inclusion of extra rows or columns containing significant numbers of missing entries is likely to lead to poor recovery of the missing parts.
Low-rank matrix factorization with attributes
TLDR
This work develops a new collaborative filtering method that combines previously known users' preferences (i.e., standard CF) with product/user attributes to predict a given user's interest in a particular product.