Corpus ID: 231802094

On the computational and statistical complexity of over-parameterized matrix sensing

@article{Zhuo2021OnTC,
  title={On the computational and statistical complexity of over-parameterized matrix sensing},
  author={Jiacheng Zhuo and Jeongyeol Kwon and Nhat Ho and Constantine Caramanis},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.02756}
}
We consider solving the low-rank matrix sensing problem with the Factorized Gradient Descent (FGD) method when the true rank is unknown and over-specified, which we refer to as over-parameterized matrix sensing. If the ground truth signal X* ∈ R^{d×d} is of rank r, but we try to recover it using FF^⊤ where F ∈ R^{d×k} and k > r, the existing statistical analysis falls short, due to a flat local curvature of the loss function around the global minima. By decomposing the factorized matrix F into separate…
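To make the setting concrete, the following is a minimal sketch of over-parameterized FGD on a synthetic instance, assuming Gaussian sensing matrices, the squared loss f(F) = (1/2n) Σ_i (⟨A_i, FF^⊤⟩ − y_i)², a small random initialization, and a hand-picked constant step size; none of these choices are taken from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, k, n = 20, 2, 5, 2000          # true rank r, over-specified rank k > r

# Rank-r ground truth, normalized so its spectral norm is 1.
U = rng.normal(size=(d, r))
X_star = U @ U.T
X_star /= np.linalg.norm(X_star, 2)

# Gaussian sensing matrices A_i and noiseless measurements y_i = <A_i, X*>.
A = rng.normal(size=(n, d, d))
y = np.einsum("nij,ij->n", A, X_star)

def grad(F):
    """Gradient of f(F) = (1/2n) * sum_i (<A_i, F F^T> - y_i)^2."""
    resid = np.einsum("nij,ij->n", A, F @ F.T) - y
    G = np.einsum("n,nij->ij", resid, A) / n      # (1/n) * sum_i resid_i * A_i
    return (G + G.T) @ F                          # chain rule through F F^T

F = 0.1 * rng.normal(size=(d, k))   # small random init, k > r (over-parameterized)
eta = 0.1                           # assumed constant step size
for _ in range(3000):
    F -= eta * grad(F)

print("relative error:", np.linalg.norm(F @ F.T - X_star) / np.linalg.norm(X_star))
```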

Citations

Rank Overspecified Robust Matrix Recovery: Subgradient Method and Exact Recovery
TLDR: The robust recovery of a low-rank matrix from sparsely and grossly corrupted Gaussian measurements, with no prior knowledge of the intrinsic rank, is studied; it is shown that under a regularity condition on the sensing matrices and corruption, even with the rank overspecified, the subgradient method converges to the exact low-rank solution at a sublinear rate.
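For concreteness, here is a sketch of the subgradient method on the robust ℓ1 loss g(F) = (1/n) Σ_i |⟨A_i, FF^⊤⟩ − y_i|, with assumed Gaussian measurements, a 10% fraction of gross corruptions, and a geometrically decaying step size; the paper's exact initialization and step schedule may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
d, r, k, n = 15, 2, 4, 1500          # rank overspecified: k > r
U = rng.normal(size=(d, r))
X_star = U @ U.T
X_star /= np.linalg.norm(X_star, 2)

A = rng.normal(size=(n, d, d))
y = np.einsum("nij,ij->n", A, X_star)
outliers = rng.random(n) < 0.1       # sparse, gross corruptions
y[outliers] += 10.0 * rng.normal(size=outliers.sum())

def subgrad(F):
    """A subgradient of g(F) = (1/n) * sum_i |<A_i, F F^T> - y_i|."""
    s = np.sign(np.einsum("nij,ij->n", A, F @ F.T) - y)
    G = np.einsum("n,nij->ij", s, A) / n
    return (G + G.T) @ F

F = 0.1 * rng.normal(size=(d, k))
step = 0.1
for _ in range(3000):
    F -= step * subgrad(F)
    step *= 0.999                    # geometrically decaying step size

print("relative error:", np.linalg.norm(F @ F.T - X_star) / np.linalg.norm(X_star))
```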
Sharp Global Guarantees for Nonconvex Low-Rank Matrix Recovery in the Overparameterized Regime
We prove that it is possible for nonconvex low-rank matrix recovery to contain no spurious local minima when the rank r* of the unknown ground truth is strictly less than the search rank r, and…
A Farewell to the Bias-Variance Tradeoff? An Overview of the Theory of Overparameterized Machine Learning
TLDR: This paper provides a succinct overview of this emerging theory of overparameterized ML (henceforth abbreviated as TOPML) that explains these recent findings through a statistical signal processing perspective, and emphasizes the unique aspects that define the TOPML research area as a subfield of modern ML theory.
Sign-RIP: A Robust Restricted Isometry Property for Low-rank Matrix Recovery
  • Jianhao Ma, S. Fattahi
  • Computer Science, Mathematics
  • 2021
TLDR: This work proposes a robust restricted isometry property, called Sign-RIP, shows its broad applications in robust low-rank matrix recovery, and demonstrates the uniform convergence of the subdifferentials of the robust matrix recovery objective with a nonsmooth loss function, even in the presence of arbitrarily dense and arbitrarily large outliers.

References

Showing 1–10 of 33 references
Tight Oracle Inequalities for Low-Rank Matrix Recovery From a Minimal Number of Noisy Random Measurements
  • E. Candès, Y. Plan
  • Mathematics, Computer Science
  • IEEE Transactions on Information Theory
  • 2011
TLDR: It is shown that properly constrained nuclear-norm minimization stably recovers a low-rank matrix from a constant number of noisy measurements per degree of freedom; this seems to be the first result of this nature.
Estimation of (near) low-rank matrices with noise and high-dimensional scaling
TLDR: Simulations show excellent agreement with the high-dimensional scaling of the error predicted by the theory, and illustrate the theory's consequences for a number of specific learning models, including low-rank multivariate or multi-task regression, system identification in vector autoregressive processes, and recovery of low-rank matrices from random projections.
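The estimators analyzed here take the regularized form min_Θ (1/2n)‖y − 𝔛(Θ)‖² + λ‖Θ‖_*; a standard way to compute them is proximal gradient descent, whose proximal step soft-thresholds the singular values (singular value thresholding). The sketch below, with an assumed noise level, λ, and step size, illustrates that general recipe rather than the paper's own experiments.

```python
import numpy as np

def svt(X, tau):
    """Prox of tau * nuclear norm: soft-threshold the singular values of X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(5)
d, r, n = 20, 2, 1200
Theta_star = rng.normal(size=(d, r)) @ rng.normal(size=(r, d))
A = rng.normal(size=(n, d, d))
y = np.einsum("nij,ij->n", A, Theta_star) + 0.1 * rng.normal(size=n)

lam, eta = 0.05, 0.3                  # assumed regularization and step size
Theta = np.zeros((d, d))
for _ in range(300):
    resid = np.einsum("nij,ij->n", A, Theta) - y
    grad = np.einsum("n,nij->ij", resid, A) / n
    Theta = svt(Theta - eta * grad, eta * lam)

print("relative error:",
      np.linalg.norm(Theta - Theta_star) / np.linalg.norm(Theta_star))
```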
SpaRCS: Recovering low-rank and sparse matrices from compressive measurements
TLDR: This work proposes a natural optimization problem for signal recovery under this model and develops a new greedy algorithm called SpaRCS to solve it, which inherits a number of desirable properties from the state-of-the-art CoSaMP and ADMiRA algorithms.
Low-Rank Matrix Recovery From Errors and Erasures
TLDR: A new unified performance guarantee for when minimizing the nuclear norm plus the ℓ1 norm succeeds in exact recovery is provided, yielding the first guarantees for (1) recovery when only a vanishing fraction of entries of a corrupted matrix is observed, and (2) deterministic matrix completion.
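A sketch of that convex program: minimize ‖L‖_* + λ‖S‖_1 subject to agreement with the observed entries of the corrupted matrix. The cvxpy dependency, the 80% sampling rate, the 5% corruption rate, and the λ = 1/√d scaling are all assumptions for illustration.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(6)
d, r = 15, 2
L0 = rng.normal(size=(d, r)) @ rng.normal(size=(r, d))   # low-rank part
S0 = np.zeros((d, d))
spots = rng.random((d, d)) < 0.05                        # sparse gross errors
S0[spots] = 10.0 * rng.normal(size=spots.sum())
mask = (rng.random((d, d)) < 0.8).astype(float)          # observed entries
M = L0 + S0

L = cp.Variable((d, d))
S = cp.Variable((d, d))
lam = 1.0 / np.sqrt(d)                                   # common l1-weight scaling
objective = cp.Minimize(cp.normNuc(L) + lam * cp.sum(cp.abs(S)))
constraints = [cp.multiply(mask, L + S) == mask * M]
cp.Problem(objective, constraints).solve()

print("low-rank error:", np.linalg.norm(L.value - L0) / np.linalg.norm(L0))
```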
Understanding Alternating Minimization for Matrix Completion
  • Moritz Hardt
  • Computer Science, Mathematics
  • 2014 IEEE 55th Annual Symposium on Foundations of Computer Science
  • 2014
TLDR: A new algorithm based on alternating minimization is given that provably recovers an unknown low-rank matrix from a random subsample of its entries under a standard incoherence assumption, and gives the strongest sample bounds among all subquadratic-time algorithms that we are aware of.
Fast low-rank estimation by projected gradient descent: General statistical and algorithmic guarantees
TLDR: This work provides a simple set of conditions under which projected gradient descent, when given a suitable initialization, converges geometrically to a statistically useful solution to the factorized optimization problem with rank constraints.
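One common instantiation of projected gradient descent with a rank constraint is iterative hard thresholding: a gradient step on the full matrix followed by projection onto the rank-r set via truncated SVD. The sketch below uses assumed Gaussian sensing and a hand-picked step size; the paper's factorized variant differs in its details.

```python
import numpy as np

rng = np.random.default_rng(2)
d, r, n = 20, 2, 1500
U = rng.normal(size=(d, r))
X_star = U @ U.T
A = rng.normal(size=(n, d, d))
y = np.einsum("nij,ij->n", A, X_star)

def project_rank(X, r):
    """Project X onto the set of matrices of rank at most r (truncated SVD)."""
    Uv, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (Uv[:, :r] * s[:r]) @ Vt[:r]

X = np.zeros((d, d))
eta = 0.3                            # assumed step size
for _ in range(300):
    resid = np.einsum("nij,ij->n", A, X) - y
    grad = np.einsum("n,nij->ij", resid, A) / n
    X = project_rank(X - eta * grad, r)

print("relative error:", np.linalg.norm(X - X_star) / np.linalg.norm(X_star))
```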
Nuclear norm penalization and optimal rates for noisy low rank matrix completion
This paper deals with the trace regression model where $n$ entries or linear combinations of entries of an unknown $m_1\times m_2$ matrix $A_0$ corrupted by noise are observed. We propose a new…
Restricted strong convexity and weighted matrix completion: Optimal bounds with noise
TLDR: The matrix completion problem under a form of row/column weighted entrywise sampling is considered, including the case of uniform entrywise sampling as a special case, and it is proved that, with high probability, a form of restricted strong convexity holds with respect to a weighted Frobenius norm.
Low-rank matrix completion using alternating minimization
TLDR: This paper presents one of the first theoretical analyses of the performance of alternating minimization for matrix completion and the related problem of matrix sensing, and shows that alternating minimization guarantees faster convergence to the true matrix while allowing a significantly simpler analysis.
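A minimal alternating least-squares sketch for matrix completion: fix one factor and solve a small ridge-regularized least-squares problem per row over the observed entries, then swap. The sampling rate, the tiny ridge term, and the random initialization are assumptions; the analyzed algorithm additionally draws fresh samples in each round, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(3)
d1, d2, r = 40, 30, 2
M = rng.normal(size=(d1, r)) @ rng.normal(size=(r, d2))   # rank-r ground truth
mask = rng.random((d1, d2)) < 0.4                          # observed entries

U = rng.normal(size=(d1, r))                               # random init
V = rng.normal(size=(d2, r))
lam = 1e-6                                                 # tiny ridge for stability

for _ in range(50):
    # Fix V, solve a least-squares problem per row of U over observed entries.
    for i in range(d1):
        Vi = V[mask[i]]                                    # factors of observed cols
        U[i] = np.linalg.solve(Vi.T @ Vi + lam * np.eye(r), Vi.T @ M[i, mask[i]])
    # Fix U, solve per column of V.
    for j in range(d2):
        Uj = U[mask[:, j]]
        V[j] = np.linalg.solve(Uj.T @ Uj + lam * np.eye(r), Uj.T @ M[mask[:, j], j])

print("relative error:", np.linalg.norm(U @ V.T - M) / np.linalg.norm(M))
```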
Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization
TLDR: It is shown that if a certain restricted isometry property holds for the linear transformation defining the constraints, the minimum-rank solution can be recovered by solving a convex optimization problem, namely, the minimization of the nuclear norm over the given affine space.
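The convex program in question is min ‖X‖_* subject to A(X) = y. A small sketch using cvxpy (an assumed dependency); the dimensions and number of measurements below are illustrative, chosen as a small multiple of the r(2d − r) degrees of freedom.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(4)
d, r, n = 10, 2, 80                  # n a small multiple of the r(2d - r) dof
U = rng.normal(size=(d, r))
X_star = U @ U.T
A = rng.normal(size=(n, d, d))
y = np.einsum("nij,ij->n", A, X_star)

# Nuclear norm minimization subject to the affine measurement constraints.
X = cp.Variable((d, d))
constraints = [cp.sum(cp.multiply(A[i], X)) == y[i] for i in range(n)]
prob = cp.Problem(cp.Minimize(cp.normNuc(X)), constraints)
prob.solve()

print("relative error:", np.linalg.norm(X.value - X_star) / np.linalg.norm(X_star))
```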