• Corpus ID: 196831612

# Fast, Provably Convergent IRLS Algorithm for p-norm Linear Regression

@inproceedings{Adil2019FastPC,
title={Fast, Provably Convergent IRLS Algorithm for p-norm Linear Regression},
author={Deeksha Adil and Richard Peng and Sushant Sachdeva},
booktitle={NeurIPS},
year={2019}
}
• Published in NeurIPS, 16 July 2019
• Computer Science
Linear regression in $\ell_p$-norm is a canonical optimization problem that arises in several applications, including sparse recovery, semi-supervised learning, and signal processing. Generic convex optimization algorithms for solving $\ell_p$-regression are slow in practice. Iteratively Reweighted Least Squares (IRLS) is an easy-to-implement family of algorithms for solving these problems that has been studied for over 50 years. However, these algorithms often diverge for $p > 3$, and since the…
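For readers unfamiliar with the method, below is a minimal numpy sketch of the classical IRLS iteration for $\ell_p$ regression that the abstract refers to. This is the textbook scheme, not the paper's provably convergent p-IRLS algorithm; the function name, parameters, and safeguards are illustrative assumptions.

```python
import numpy as np

def irls_lp(A, b, p=4, iters=50, eps=1e-8):
    """Textbook IRLS for min_x ||Ax - b||_p (illustrative sketch).

    Each step solves a weighted least-squares problem with weights
    |r_i|^(p-2) taken from the current residual r = Ax - b.  As the
    abstract notes, this plain scheme can diverge for p > 3.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]        # start at the l2 solution
    for _ in range(iters):
        r = A @ x - b
        w = np.maximum(np.abs(r), eps) ** (p - 2)   # IRLS weights; eps guards r_i = 0
        Aw = A * w[:, None]                         # rows of A scaled by w
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)     # normal equations: A^T W A x = A^T W b
    return x

# Example: an overdetermined system with p = 4
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 5))
b = rng.standard_normal(100)
x_p = irls_lp(A, b, p=4)
```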

## Citations

### PROMPT: Parallel Iterative Algorithm for $\ell_{p}$ norm linear regression via Majorization Minimization with an application to semi-supervised graph learning

• Computer Science
• 2021
It is proved that the proposed algorithm is monotonic and converges to the optimal solution of the problem for any value of $p$, and that it outperforms state-of-the-art algorithms in speed of convergence.
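As background, the majorization-minimization derivation of IRLS rests on a classical quadratic majorizer, stated here for $p \in (0, 2]$ (where it follows from concavity of $s \mapsto s^{p/2}$); the cited paper's construction for general $p$ is its own contribution:

$$|r|^p \;\le\; \frac{p}{2}\,|r_0|^{p-2}\,r^2 \;+\; \Bigl(1-\frac{p}{2}\Bigr)|r_0|^p, \qquad \text{with equality at } |r| = |r_0|.$$

Minimizing the right-hand side over $x$, with $r_i = a_i^\top x - b_i$, is exactly a weighted least-squares step with weights $w_i = |r_{0,i}|^{p-2}$, i.e., the IRLS update.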

### Iteratively Reweighted Least Squares for Basis Pursuit with Global Linear Convergence Rate

• Computer Science
NeurIPS
• 2021
It is proved that a variant of IRLS converges to a sparse solution with a global linear rate, i.e., with a linear error decrease occurring immediately from any initialization, provided the measurements satisfy the usual null space property assumption.

### Iteratively Reweighted Least Squares for $\ell_1$-minimization with Global Linear Convergence Rate

• Computer Science
ArXiv
• 2020
It is proved that IRLS for $\ell_1$-minimization converges to a sparse solution with a global linear rate, and the theory is supported by numerical experiments indicating that the linear rate essentially captures the correct dimension dependence.
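To make the iteration concrete, here is a minimal numpy sketch of the classical smoothed IRLS scheme for basis pursuit, $\min \|x\|_1$ subject to $Ax = b$; the smoothing schedule and iteration count are illustrative assumptions, not the choices analyzed in the cited papers.

```python
import numpy as np

def irls_basis_pursuit(A, b, iters=100, eps=1.0):
    """Illustrative IRLS sketch for min ||x||_1 s.t. Ax = b.

    Each step solves a weighted minimum-norm problem in closed form
    (assumes A has full row rank), then the smoothing eps is decreased.
    """
    x = np.linalg.pinv(A) @ b            # least-norm initialization
    for _ in range(iters):
        d = np.sqrt(x**2 + eps**2)       # inverse weights: d_i = 1 / w_i
        AD = A * d[None, :]              # A @ diag(d)
        # x = D A^T (A D A^T)^{-1} b minimizes sum_i x_i^2 / d_i s.t. Ax = b
        x = d * (A.T @ np.linalg.solve(AD @ A.T, b))
        eps = max(0.9 * eps, 1e-10)      # illustrative smoothing schedule
    return x
```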

### Global Linear and Local Superlinear Convergence of IRLS for Non-Smooth Robust Regression

• Mathematics
• 2022
We advance both the theory and practice of robust $\ell_p$-quasinorm regression for $p \in (0, 1]$ by using novel variants of iteratively reweighted least-squares (IRLS) to solve the underlying non-smooth problem.

### Fast Regression for Structured Inputs

• Computer Science, Mathematics
ICLR
• 2022
This work gives an algorithm for $\ell_p$ regression on Vandermonde matrices that runs in time $O(n \log n + (dp)^{\omega} \cdot \mathrm{polylog}\ n)$, where $\omega$ is the exponent of matrix multiplication.

### Improved iteration complexities for overconstrained p-norm regression

• Computer Science, Mathematics
STOC
• 2022
Improved iteration complexities for solving $\ell_p$ regression are obtained, along with an $O(d^{1/3} \epsilon^{-2/3})$ iteration complexity for approximate $\ell_\infty$ regression.

### Highly smooth minimization of non-smooth problems

The work goes beyond the previous $O(\epsilon^{-1})$ barrier in terms of $\epsilon$ dependence, and in the cases of $\ell_\infty$ regression and $\ell_1$-SVM, overall improvements for some parameter settings in the moderate-accuracy regime are established.

### Faster p-norm minimizing flows, via smoothed q-norm problems

• Computer Science, Mathematics
SODA
• 2020
The key technical contribution is to show that the smoothed $\ell_p$-norm problems introduced by Adil et al. are interreducible for different values of $p$, yielding the first high-accuracy algorithm for computing weighted $\ell_p$-norm minimizing flows.

### Complementary Composite Minimization, Small Gradients in General Norms, and Applications to Regression Problems

• Computer Science, Mathematics
ArXiv
• 2021
This work introduces a new algorithmic framework for complementary composite minimization, where the objective function decouples into a (weakly) smooth and a uniformly convex term, and proves that the algorithms resulting from this framework are near-optimal in most of the standard optimization settings.

### Algorithms for $\ell_p$-based semi-supervised learning on graphs

• Computer Science, Mathematics
• 2019

### Algorithms for Lipschitz Learning on Graphs

• Mathematics, Computer Science
COLT
• 2015
This work develops fast algorithms for solving regression problems on graphs where one is given the value of a function at some vertices and must find its smoothest possible extension to all vertices, using the absolutely minimal Lipschitz extension.
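For concreteness, here is a naive fixed-point sketch of the absolutely minimal Lipschitz (infinity-harmonic) extension on an unweighted graph, using the classical condition $u(v) = \tfrac{1}{2}(\max_{w \sim v} u(w) + \min_{w \sim v} u(w))$ at unlabeled vertices; the cited paper's algorithms are exact and far faster than this illustrative iteration.

```python
def amle(adj, labels, iters=1000):
    """Naive iteration for the absolutely minimal Lipschitz extension.

    adj:    dict mapping each vertex to a list of its neighbors
    labels: dict mapping a subset of vertices to fixed values
    """
    u = {v: labels.get(v, 0.0) for v in adj}
    for _ in range(iters):
        for v in adj:
            if v in labels:
                continue                             # labeled vertices stay fixed
            vals = [u[w] for w in adj[v]]
            u[v] = 0.5 * (max(vals) + min(vals))     # infinity-harmonic update
    return u

# Example: path graph 0-1-2-3 with its endpoints labeled
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(amle(adj, {0: 0.0, 3: 1.0}))   # converges to 0, 1/3, 2/3, 1
```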