
# A Scalable Second Order Method for Ill-Conditioned Matrix Completion from Few Samples

@inproceedings{Kmmerle2021ASS,
title={A Scalable Second Order Method for Ill-Conditioned Matrix Completion from Few Samples},
author={Christian K{\"u}mmerle and Claudio Mayrink Verdun},
booktitle={International Conference on Machine Learning},
year={2021}
}
• Published in International Conference on Machine Learning, 3 June 2021
• Computer Science
We propose an iterative algorithm for low-rank matrix completion that can be interpreted as an iteratively reweighted least squares (IRLS) algorithm, a saddle-escaping smoothing Newton method, or a variable metric proximal gradient method applied to a non-convex rank surrogate. It combines the favorable data-efficiency of previous IRLS approaches with scalability improved by several orders of magnitude. We establish the first local convergence guarantee from a minimal number of samples for…
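To make the reweighting idea concrete, here is a minimal NumPy sketch of a classical one-sided IRLS iteration for matrix completion. This is a simplified stand-in, not the paper's algorithm: MatrixIRLS uses two-sided, optimally conditioned weight operators and a compressed conjugate gradient solve to reach its scalability, none of which is reproduced here, and the function name, smoothing rule, and parameters below are illustrative assumptions.

```python
import numpy as np

def irls_completion(M_obs, mask, rank, n_iter=50, eps0=1.0):
    """One-sided IRLS for matrix completion (schematic).

    Each iteration solves min_X tr(X^T W X) subject to X agreeing with
    the observed entries, where W = (X X^T + eps^2 I)^{-1/2} is the
    classical smoothed nuclear-norm reweighting from the current iterate.
    """
    m, n = M_obs.shape
    X = np.where(mask, M_obs, 0.0)   # start from the observed entries
    eps = eps0
    for _ in range(n_iter):
        # W = (X X^T + eps^2 I)^{-1/2} via a symmetric eigendecomposition.
        evals, evecs = np.linalg.eigh(X @ X.T + eps**2 * np.eye(m))
        W = (evecs / np.sqrt(evals)) @ evecs.T
        # The weighted least-squares problem decouples over columns: with
        # the observed entries x_obs fixed, the minimizer of x^T W x over
        # the free entries is x_free = -W_ff^{-1} W_fo x_obs.
        X_new = np.zeros_like(X)
        for j in range(n):
            obs, free = mask[:, j], ~mask[:, j]
            b = M_obs[obs, j]
            X_new[obs, j] = b
            if free.any() and obs.any():
                X_new[free, j] = -np.linalg.solve(
                    W[np.ix_(free, free)], W[np.ix_(free, obs)] @ b)
        X = X_new
        # Shrink the smoothing toward the (rank+1)-th singular value, a
        # common rule for steering the surrogate toward the target rank.
        s = np.linalg.svd(X, compute_uv=False)
        if len(s) > rank:
            eps = min(eps, s[rank])
    return X

# Tiny demo: recover a 30 x 30 rank-2 matrix from ~35% of its entries.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 30))
mask = rng.random(A.shape) < 0.35
X_hat = irls_completion(np.where(mask, A, 0.0), mask, rank=2)
print("relative error:", np.linalg.norm(X_hat - A) / np.linalg.norm(A))
```

The dense per-column solves are what limit this classical scheme to small problems; that scalability bottleneck is precisely what the paper addresses.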
## 8 Citations

• Computer Science
NeurIPS
• 2021
It is proved that a variant of IRLS converges with a global linear rate to a sparse solution, i.e., with a linear error decrease occurring immediately from any initialization, if the measurements fulfill the usual null space property assumption.
• Computer Science
SIAM Journal on Mathematics of Data Science
• 2022
GNMR is presented: an extremely simple iterative algorithm for low-rank matrix recovery based on a Gauss-Newton linearization. For matrix completion with uniform sampling, GNMR performs better than several popular methods, especially when given very few observations close to the information limit (a toy version of the Gauss-Newton step is sketched after this list).
• Computer Science
ArXiv
• 2022
For a symmetric ground truth and the Root Mean Square Error (RMSE) loss, it is proved that the preconditioned SGD converges to ε-accuracy in O(log(1/ε)) iterations, with a rapid linear convergence rate as if the ground truth were perfectly conditioned with κ = 1.
• Mathematics
• 2022
We advance both the theory and practice of robust ℓp-quasinorm regression for p ∈ (0, 1] by using novel variants of iteratively reweighted least-squares (IRLS) to solve the underlying…
• Computer Science
• 2023
A heuristic method is proposed for retrieving sparse approximate solutions of optimization problems via minimizing the ℓp quasi-norm; a proximal gradient step avoids the convex projection step and hence speeds up the algorithm, and its convergence is proved.
• Computer Science, Mathematics
ArXiv
• 2022
A suitable non-convex iteratively reweighted least squares (IRLS) algorithm is developed, its local quadratic convergence is established, and it is shown that spatial samples can be substituted by a comparable number of space-time samples.
• Computer Science
ArXiv
• 2022
The feasibility of boosting the performance of a non-negative learning algorithm through an error feedback controller is demonstrated: a PI-NLF model outperforms state-of-the-art models in both computational efficiency and estimation accuracy for the missing data of an HDI matrix.
• Biology
Frontiers in Genetics
• 2022
A novel method is developed, called single cell Gauss–Newton Gene expression Imputation (scGNGI), to impute scRNA-seq expression matrices using low-rank matrix completion, which better preserves gene expression variability among cells.
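As promised in the GNMR entry above, here is a toy rendition of the Gauss-Newton linearization step for factorized matrix completion. It assembles the linearized system densely and takes the minimum-norm least-squares solution; the published GNMR works with sparse iterative solvers and comes in several variants, so the function name, the plain iteration loop, and all sizes below are illustrative assumptions.

```python
import numpy as np

def gauss_newton_step(U, V, M_obs, rows, cols):
    """One Gauss-Newton step for min ||P_Omega(U V^T - M)||_F^2 (toy).

    Linearizing U V^T around the current (U, V) gives, for each observed
    entry (i, j):  U[i].V_new[j] + U_new[i].V[j] = M[i, j] + U[i].V[j].
    The minimum-norm solution of this linear system is the next iterate.
    """
    m, r = U.shape
    n = V.shape[0]
    A = np.zeros((len(rows), (m + n) * r))
    b = np.empty(len(rows))
    for t, (i, j) in enumerate(zip(rows, cols)):
        A[t, i * r:(i + 1) * r] = V[j]                    # coeffs of U_new[i]
        A[t, m * r + j * r:m * r + (j + 1) * r] = U[i]    # coeffs of V_new[j]
        b[t] = M_obs[i, j] + U[i] @ V[j]                  # linearization offset
    z, *_ = np.linalg.lstsq(A, b, rcond=None)             # minimum-norm solve
    return z[:m * r].reshape(m, r), z[m * r:].reshape(n, r)

# Tiny demo: rank-1 completion of a 20 x 20 matrix from 40% of its entries.
rng = np.random.default_rng(1)
M = np.outer(rng.standard_normal(20), rng.standard_normal(20))
mask = rng.random(M.shape) < 0.4
rows, cols = np.nonzero(mask)
U, V = rng.standard_normal((20, 1)), rng.standard_normal((20, 1))
for _ in range(15):
    U, V = gauss_newton_step(U, V, np.where(mask, M, 0.0), rows, cols)
print("relative error:", np.linalg.norm(U @ V.T - M) / np.linalg.norm(M))
```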

## References

SHOWING 1-10 OF 97 REFERENCES

• Computer Science
J. Mach. Learn. Res.
• 2021
This paper theoretically shows that ScaledGD achieves the best of both worlds: it converges linearly at a rate independent of the condition number of the low-rank matrix, similar to alternating minimization, while maintaining the low per-iteration cost of gradient descent (a minimal sketch of the preconditioning appears after this reference list).
• Computer Science
SIAM J. Math. Data Sci.
• 2021
We present a new, simple and computationally efficient iterative method for low-rank matrix completion. Our method is inspired by the class of factorization-type iterative algorithms, but…
• Computer Science, Mathematics
SIAM J. Optim.
• 2011
An efficient implementation is presented of an iteratively reweighted least squares algorithm for recovering a matrix from a small number of linear measurements, designed to simultaneously promote both a minimal nuclear norm and an approximately low-rank solution.
• Computer Science
2015 IEEE 56th Annual Symposium on Foundations of Computer Science
• 2015
This paper establishes a theoretical guarantee that the factorization-based formulation correctly recovers the underlying low-rank matrix, and is the first to provide an exact recovery guarantee for many standard algorithms such as gradient descent, SGD, and block coordinate gradient descent.
• Computer Science
2017 International Conference on Sampling Theory and Applications (SampTA)
• 2017
HM-IRLS optimizes a non-convex Schatten-p penalization to promote low-rankness; this strategy carries three major strengths, in particular for the matrix completion setting.
• Computer Science
Math. Program. Comput.
• 2012
A low-rank factorization model is proposed together with a nonlinear successive over-relaxation (SOR) algorithm that requires only the solution of a linear least squares problem per iteration, improving the capacity to solve large-scale problems.
• Computer Science
J. Mach. Learn. Res.
• 2012
This paper proposes a family of iteratively reweighted least squares algorithms, IRLS-p, and gives theoretical guarantees similar to those for nuclear norm minimization: recovery of low-rank matrices under certain assumptions on the operator defining the constraints.
• Computer Science
IEEE Transactions on Signal Processing
• 2019
This tutorial-style overview highlights the important role of statistical models in enabling efficient nonconvex optimization with performance guarantees and reviews two contrasting approaches: two-stage algorithms, which consist of a tailored initialization step followed by successive refinement; and global landscape analysis and initialization-free algorithms.
• Computer Science
SIAM J. Numer. Anal.
• 2013
This paper starts with a preliminary yet novel analysis for unconstrained $\ell_q$ minimization, which includes convergence, error bound, and local convergence behavior, and extends the algorithm and analysis to the recovery of low-rank matrices.
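The preconditioning behind the ScaledGD result cited above is compact enough to sketch, as promised. The following is a minimal NumPy rendition under simplifying assumptions (dense matrices, a fixed step size, plain spectral initialization, and no incoherence projection); names and parameters are illustrative rather than the reference implementation.

```python
import numpy as np

def scaled_gd(M_obs, mask, r, eta=0.5, n_iter=300):
    """ScaledGD-style matrix completion on a factorization L R^T (sketch).

    Vanilla gradient descent on (L, R) slows down as the condition number
    of the ground truth grows; right-multiplying each factor's gradient by
    the inverse Gram matrix of the other factor removes that dependence.
    """
    p = mask.mean()                          # fraction of observed entries
    U, s, Vt = np.linalg.svd(M_obs / p, full_matrices=False)
    L = U[:, :r] * np.sqrt(s[:r])            # spectral initialization
    R = Vt[:r].T * np.sqrt(s[:r])
    for _ in range(n_iter):
        resid = mask * (L @ R.T - M_obs) / p  # gradient of the sampled loss
        grad_L, grad_R = resid @ R, resid.T @ L
        # Preconditioners: the step that distinguishes ScaledGD from GD.
        P_L, P_R = np.linalg.inv(R.T @ R), np.linalg.inv(L.T @ L)
        L, R = L - eta * grad_L @ P_L, R - eta * grad_R @ P_R
    return L @ R.T
```

Deleting the two inverse-Gram products recovers vanilla factored gradient descent, which makes the effect of the preconditioning on ill-conditioned instances easy to compare.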