Leave-One-Out Approach for Matrix Completion: Primal and Dual Analysis

@article{Ding2020LeaveOneOutAF,
  title={Leave-One-Out Approach for Matrix Completion: Primal and Dual Analysis},
  author={Lijun Ding and Yudong Chen},
  journal={IEEE Transactions on Information Theory},
  year={2020},
  volume={66},
  pages={7274--7301}
}
In this paper, we introduce a powerful technique based on Leave-One-Out analysis to the study of low-rank matrix completion problems. Using this technique, we develop a general approach for obtaining fine-grained, entrywise bounds for iterative stochastic procedures in the presence of probabilistic dependency. We demonstrate the power of this approach in analyzing two of the most important algorithms for matrix completion: (i) the non-convex approach based on Projected Gradient…
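A minimal NumPy sketch of the kind of algorithm the abstract refers to: projected gradient descent on the factorized matrix-completion objective, with a spectral initialization and a row-norm projection. The step size, iteration count, and projection radius below are illustrative choices, not values from the paper.

```python
import numpy as np

def projected_gd_completion(M_obs, mask, rank, step=0.02, iters=800):
    """Sketch: minimize ||P_Omega(X Y^T - M)||_F^2 / 2 over factors X, Y,
    projecting each factor's rows onto an l2 ball (an l_{2,inf}-style
    constraint). `M_obs` holds observed entries, zero elsewhere; `mask`
    is the 0/1 observation pattern."""
    p = mask.mean()  # empirical observation probability
    # Spectral initialization from the rescaled observed matrix.
    U, s, Vt = np.linalg.svd(M_obs / p, full_matrices=False)
    X = U[:, :rank] * np.sqrt(s[:rank])
    Y = Vt[:rank, :].T * np.sqrt(s[:rank])
    # Heuristic projection radius: twice the largest initial row norm.
    radius = 2 * max(np.linalg.norm(X, axis=1).max(),
                     np.linalg.norm(Y, axis=1).max())
    for _ in range(iters):
        R = mask * (X @ Y.T - M_obs)          # residual on observed entries
        X, Y = X - step * (R @ Y), Y - step * (R.T @ X)
        for Z in (X, Y):                      # project rows onto the ball
            norms = np.linalg.norm(Z, axis=1, keepdims=True)
            Z *= np.minimum(1.0, radius / np.maximum(norms, 1e-12))
    return X @ Y.T
```

On a small synthetic rank-2 matrix with about half the entries observed, this sketch recovers the full matrix to small relative error; the projection step is what the hypothetical `radius` parameter controls.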


Noisy Matrix Completion: Understanding Statistical Guarantees for Convex Relaxation via Nonconvex Optimization

It is demonstrated that when the rank and the condition number of the unknown matrix are bounded by a constant, the convex programming approach achieves near-optimal estimation errors - in terms of the Euclidean loss, the entrywise loss, and the spectral norm loss - for a wide range of noise levels.

Fine-grained Generalization Analysis of Inductive Matrix Completion

The (smoothed) adjusted trace-norm minimization strategy is introduced, an inductive analogue of the weighted trace norm, for which it is confirmed that the strategy outperforms standard inductive matrix completion on various synthetic datasets and real problems, justifying its place as an important tool in the arsenal of methods for matrix completion using side information.

Noisy Matrix Completion: Understanding the Stability of Convex Relaxation via Nonconvex Optimization

It is demonstrated that the convex programming approach achieves near-optimal estimation errors — in terms of the Euclidean loss, the entrywise loss, and the spectral norm loss — for a wide range of noise levels by bridging convex relaxation with the nonconvex Burer–Monteiro approach.

Nonconvex Optimization Meets Low-Rank Matrix Factorization: An Overview

This tutorial-style overview highlights the important role of statistical models in enabling efficient nonconvex optimization with performance guarantees and reviews two contrasting approaches: two-stage algorithms, which consist of a tailored initialization step followed by successive refinement; and global landscape analysis and initialization-free algorithms.

Nonconvex Rectangular Matrix Completion via Gradient Descent Without ℓ₂,∞ Regularization

The analysis of the vanilla gradient descent for positive semidefinite matrix completion in the literature is extended to the rectangular case, and the required sampling rate is improved.

Nonconvex Matrix Factorization From Rank-One Measurements

We consider the problem of recovering low-rank matrices from random rank-one measurements, which spans numerous applications including covariance sketching, phase retrieval, and quantum state tomography…

On the Convex Geometry of Blind Deconvolution and Matrix Completion

This paper takes a novel, more geometric viewpoint to analyze both the matrix completion and the blind deconvolution scenario and finds that for both applications the dimension factors in the noise bounds are not an artifact of the proof, but the problems are intrinsically badly conditioned.

Nonconvex Low-Rank Symmetric Tensor Completion from Noisy Data

The proposed nonconvex algorithm faithfully completes the tensor and retrieves all individual tensor factors within nearly linear time, while at the same time enjoying near-optimal statistical guarantees (i.e. minimal sample complexity and optimal estimation accuracy).

Lecture notes on non-convex algorithms for low-rank matrix recovery

The goal of these notes is to review recent progress in this direction for the class of so-called “non-convex algorithms”, with a particular focus on the proof techniques.

WARPd: A Linearly Convergent First-Order Primal-Dual Algorithm for Inverse Problems with Approximate Sharpness Conditions

This work provides a first-order method: weighted, accelerated, and restarted primal-dual (WARPd), based on primal-dual iterations and a novel restart-reweight scheme, which achieves stable linear convergence to the desired vector under a generic approximate sharpness condition.

References

Showing 1–10 of 42 references

Guaranteed Matrix Completion via Nonconvex Factorization

  • Ruoyu Sun, Z. Luo
  • Computer Science
    2015 IEEE 56th Annual Symposium on Foundations of Computer Science
  • 2015
This paper establishes a theoretical guarantee for the factorization based formulation to correctly recover the underlying low-rank matrix, and is the first one that provides exact recovery guarantee for many standard algorithms such as gradient descent, SGD and block coordinate gradient descent.

Low-rank matrix completion using alternating minimization

This paper presents one of the first theoretical analyses of the performance of alternating minimization for matrix completion, and the related problem of matrix sensing, and shows that alternating minimization guarantees faster convergence to the true matrix, while allowing a significantly simpler analysis.

Fast low-rank estimation by projected gradient descent: General statistical and algorithmic guarantees

This work provides a simple set of conditions under which projected gradient descent, when given a suitable initialization, converges geometrically to a statistically useful solution to the factorized optimization problem with rank constraints.

Fast Exact Matrix Completion with Finite Samples

This paper presents a fast iterative algorithm that solves the matrix completion problem by observing $O(nr^5 \log^3 n)$ entries, which is independent of the condition number and the desired accuracy, and is the first near linear time algorithm for exact matrix completion with finite sample complexity.

The Power of Convex Relaxation: Near-Optimal Matrix Completion

This paper shows that, under certain incoherence assumptions on the singular vectors of the matrix, recovery is possible by solving a convenient convex program as soon as the number of entries is on the order of the information theoretic limit (up to logarithmic factors).

Incoherence-Optimal Matrix Completion

  • Yudong Chen
  • Computer Science
    IEEE Transactions on Information Theory
  • 2015
The results show that the standard and joint incoherence conditions are associated, respectively, with the information (statistical) and computational aspects of the matrix decomposition problem.

Nonconvex Rectangular Matrix Completion via Gradient Descent Without ℓ₂,∞ Regularization

The analysis of the vanilla gradient descent for positive semidefinite matrix completion in the literature is extended to the rectangular case, and the required sampling rate is improved.

Matrix Completion and Related Problems via Strong Duality

This work proposes a novel analytical framework and shows that, under certain dual conditions, the optimal solution of the matrix factorization program coincides with that of its bi-dual, so the global optimum of the non-convex program can be attained by solving its dual, which is convex.

Exact Matrix Completion via Convex Optimization

It is proved that one can perfectly recover most low-rank matrices from what appears to be an incomplete set of entries, and that objects other than signals and images can be perfectly reconstructed from very limited information.

A Simpler Approach to Matrix Completion

  • B. Recht
  • Computer Science
    J. Mach. Learn. Res.
  • 2011
This paper provides the best bounds to date on the number of randomly sampled entries required to reconstruct an unknown low-rank matrix by minimizing the nuclear norm of the hidden matrix subject to agreement with the provided entries.