# A Scalable Second Order Method for Ill-Conditioned Matrix Completion from Few Samples

```bibtex
@inproceedings{Kmmerle2021ASS,
  title={A Scalable Second Order Method for Ill-Conditioned Matrix Completion from Few Samples},
  author={Christian K{\"u}mmerle and Claudio Mayrink Verdun},
  booktitle={International Conference on Machine Learning},
  year={2021}
}
```

We propose an iterative algorithm for low-rank matrix completion that can be interpreted as an iteratively reweighted least squares (IRLS) algorithm, a saddle-escaping smoothing Newton method or a variable metric proximal gradient method applied to a non-convex rank surrogate. It combines the favorable data-efficiency of previous IRLS approaches with an improved scalability by several orders of magnitude. We establish the first local convergence guarantee from a minimal number of samples for…
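As a concrete illustration of the IRLS template the abstract refers to, the sketch below implements a basic nuclear-norm IRLS for matrix completion: each iteration builds a weight matrix from a smoothed inverse of the current iterate's singular values, then solves the resulting weighted least-squares problem in closed form, one column at a time. This is a minimal sketch of the classical approach (in the spirit of the 2011 SIAM J. Optim. reference listed below), not the paper's MatrixIRLS algorithm; the function name and the smoothing schedule are illustrative assumptions.

```python
import numpy as np

def irls_nuclear_completion(M_obs, mask, iters=50, eps=1.0, eps_min=1e-8, decay=0.5):
    """Illustrative nuclear-norm IRLS for matrix completion (not MatrixIRLS).

    Minimizes a smoothed nuclear-norm surrogate tr(X^T W X), with
    W = (X X^T + eps^2 I)^{-1/2}, subject to matching the observed entries
    (mask == True). The constrained weighted least-squares subproblem
    decouples over columns and is solved in closed form.
    """
    m, n = M_obs.shape
    X = np.where(mask, M_obs, 0.0)  # initialize: zeros on missing entries
    for _ in range(iters):
        # Weight matrix W = (X X^T + eps^2 I)^{-1/2}, smoothed by eps.
        U, s, _ = np.linalg.svd(X, full_matrices=True)
        s_full = np.concatenate([s, np.zeros(m - len(s))]) if m > len(s) else s
        w = 1.0 / np.sqrt(s_full**2 + eps**2)
        W = (U * w) @ U.T
        # Column-wise closed-form update of the unobserved entries:
        # minimizing x^T W x with x_S fixed gives x_F = -W_FF^{-1} W_FS x_S.
        for j in range(n):
            S = np.where(mask[:, j])[0]   # observed rows in column j
            F = np.where(~mask[:, j])[0]  # free (missing) rows
            if len(F) == 0:
                continue
            X[F, j] = -np.linalg.solve(W[np.ix_(F, F)], W[np.ix_(F, S)] @ M_obs[S, j])
        eps = max(eps * decay, eps_min)   # anneal the smoothing parameter
    return X
```

Annealing `eps` toward zero drives the surrogate toward the nuclear norm, so the iterates are pushed toward the low-rank completion consistent with the observed entries.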

## 8 Citations

### Iteratively Reweighted Least Squares for Basis Pursuit with Global Linear Convergence Rate

- Computer Science, NeurIPS
- 2021

It is proved that a variant of IRLS converges with a global linear rate to a sparse solution, i.e., with a linear error decrease occurring immediately from any initialization, if the measurements fulfill the usual null space property assumption.
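The variant analyzed above builds on the classical IRLS template for basis pursuit, min ‖x‖₁ s.t. Ax = b: each iteration reweights by the current magnitudes and solves the weighted least-squares subproblem in closed form. A minimal sketch under those assumptions (the function name and the eps-annealing schedule are illustrative, not the paper's specific variant):

```python
import numpy as np

def irls_basis_pursuit(A, b, iters=60, eps=1.0, eps_min=1e-8, decay=0.5):
    """Illustrative IRLS for basis pursuit: min ||x||_1  s.t.  Ax = b.

    With smoothed weights w_i = 1 / sqrt(x_i^2 + eps^2), the weighted
    least-squares subproblem has the closed form
        x = D A^T (A D A^T)^{-1} b,   D = diag(1 / w_i),
    so every iterate satisfies the constraint Ax = b exactly.
    """
    x = A.T @ np.linalg.solve(A @ A.T, b)  # least-norm initialization
    for _ in range(iters):
        d = np.sqrt(x**2 + eps**2)         # D's diagonal (smoothed |x_i|)
        ADA = (A * d) @ A.T                # A D A^T (column scaling = A D)
        x = d * (A.T @ np.linalg.solve(ADA, b))
        eps = max(eps * decay, eps_min)    # anneal the smoothing parameter
    return x
```

As eps shrinks, the weights of small coordinates grow, which suppresses them and steers the feasible iterates toward a sparse solution.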

### GNMR: A provable one-line algorithm for low rank matrix recovery

- Computer Science, SIAM Journal on Mathematics of Data Science
- 2022

GNMR is presented: an extremely simple iterative algorithm for low rank matrix recovery based on a Gauss-Newton linearization. For matrix completion with uniform sampling, GNMR performs better than several popular methods, especially when given very few observations close to the information limit.

### Accelerating SGD for Highly Ill-Conditioned Huge-Scale Online Matrix Completion

- Computer Science, ArXiv
- 2022

For a symmetric ground truth and the Root Mean Square Error (RMSE) loss, it is proved that the preconditioned SGD converges to ε-accuracy in O(log(1/ε)) iterations, with a rapid linear convergence rate as if the ground truth were perfectly conditioned with κ = 1.

### Global Linear and Local Superlinear Convergence of IRLS for Non-Smooth Robust Regression

- Mathematics
- 2022

We advance both the theory and practice of robust ℓp-quasinorm regression for p ∈ (0, 1] by using novel variants of iteratively reweighted least-squares (IRLS) to solve the underlying…

### Lp Quasi-norm Minimization: Algorithm and Applications

- Computer Science
- 2023

A heuristic method for retrieving sparse approximate solutions of optimization problems via minimizing the ℓp quasi-norm, using a proximal gradient step in place of the costly convex projection step to enhance the algorithm's speed, while proving its convergence.

### Learning Transition Operators From Sparse Space-Time Samples

- Computer Science, Mathematics, ArXiv
- 2022

A suitable non-convex iterative reweighted least squares (IRLS) algorithm is developed, its quadratic local convergence is established, and it is established that spatial samples can be substituted by a comparable number of space-time samples.

### PI-NLF: A Proportional-Integral Approach for Non-negative Latent Factor Analysis

- Computer Science, ArXiv
- 2022

The feasibility of boosting the performance of a non-negative learning algorithm through an error feedback controller is unveiled, demonstrating that a PI-NLF model outperforms the state-of-the-art models in both computational efficiency and estimation accuracy for missing data of an HDI matrix.

### Missing Value Imputation With Low-Rank Matrix Completion in Single-Cell RNA-Seq Data by Considering Cell Heterogeneity

- Biology, Frontiers in Genetics
- 2022

A novel method is developed, called single cell Gauss-Newton Gene expression Imputation (scGNGI), to impute the scRNA-seq expression matrices by using a low-rank matrix completion, which can better preserve gene expression variability among cells.

## References

Showing 1-10 of 97 references.

### Accelerating Ill-Conditioned Low-Rank Matrix Estimation via Scaled Gradient Descent

- Computer Science, J. Mach. Learn. Res.
- 2021

This paper theoretically shows that ScaledGD achieves the best of both worlds: it converges linearly at a rate independent of the condition number of the low-rank matrix similar as alternating minimization, while maintaining the low per-iteration cost of gradient descent.

### Rank 2r Iterative Least Squares: Efficient Recovery of Ill-Conditioned Low Rank Matrices from Few Entries

- Computer Science, SIAM J. Math. Data Sci.
- 2021

We present a new, simple and computationally efficient iterative method for low rank matrix completion. Our method is inspired by the class of factorization-type iterative algorithms, butβ¦

### Low-rank Matrix Recovery via Iteratively Reweighted Least Squares Minimization

- Computer Science, Mathematics, SIAM J. Optim.
- 2011

An efficient implementation of an iteratively reweighted least squares algorithm for recovering a matrix from a small number of linear measurements designed for the simultaneous promotion of both a minimal nuclear norm and an approximately low-rank solution is presented.

### Guaranteed Matrix Completion via Nonconvex Factorization

- Computer Science, 2015 IEEE 56th Annual Symposium on Foundations of Computer Science
- 2015

This paper establishes a theoretical guarantee for the factorization based formulation to correctly recover the underlying low-rank matrix, and is the first one that provides exact recovery guarantee for many standard algorithms such as gradient descent, SGD and block coordinate gradient descent.

### Harmonic Mean Iteratively Reweighted Least Squares for low-rank matrix recovery

- Computer Science, 2017 International Conference on Sampling Theory and Applications (SampTA)
- 2017

The strategy HM-IRLS uses to optimize a non-convex Schatten-p penalization to promote low-rankness carries three major strengths, in particular for the matrix completion setting.

### Solving a low-rank factorization model for matrix completion by a nonlinear successive over-relaxation algorithm

- Computer Science, Math. Program. Comput.
- 2012

A low-rank factorization model is proposed and a nonlinear successive over-relaxation (SOR) algorithm is constructed that only requires solving a linear least squares problem per iteration to improve the capacity of solving large-scale problems.

### Iterative reweighted algorithms for matrix rank minimization

- Computer Science, J. Mach. Learn. Res.
- 2012

This paper proposes a family of Iterative Reweighted Least Squares algorithms IRLS-p, and gives theoretical guarantees similar to those for nuclear norm minimization, that is, recovery of low-rank matrices under certain assumptions on the operator defining the constraints.

### Nonconvex Optimization Meets Low-Rank Matrix Factorization: An Overview

- Computer Science, IEEE Transactions on Signal Processing
- 2019

This tutorial-style overview highlights the important role of statistical models in enabling efficient nonconvex optimization with performance guarantees and reviews two contrasting approaches: two-stage algorithms, which consist of a tailored initialization step followed by successive refinement; and global landscape analysis and initialization-free algorithms.

### Improved Iteratively Reweighted Least Squares for Unconstrained Smoothed ℓq Minimization

- Computer Science, SIAM J. Numer. Anal.
- 2013

This paper starts with a preliminary yet novel analysis for unconstrained $\ell_q$ minimization, which includes convergence, error bound, and local convergence behavior, and extends the algorithm and analysis to the recovery of low-rank matrices.