Corpus ID: 196831612

Fast, Provably convergent IRLS Algorithm for p-norm Linear Regression

@inproceedings{Adil2019FastPC,
  title={Fast, Provably convergent IRLS Algorithm for p-norm Linear Regression},
  author={Deeksha Adil and Richard Peng and Sushant Sachdeva},
  booktitle={NeurIPS},
  year={2019}
}
Linear regression in $\ell_p$-norm is a canonical optimization problem that arises in several applications, including sparse recovery, semi-supervised learning, and signal processing. Generic convex optimization algorithms for solving $\ell_p$-regression are slow in practice. Iteratively Reweighted Least Squares (IRLS) is an easy-to-implement family of algorithms for solving these problems that has been studied for over 50 years. However, these algorithms often diverge for p > 3, and since the… 
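To make the IRLS idea concrete, here is a minimal sketch (in Python/NumPy, not taken from the paper) of the classical reweighting scheme for $\min_x \|Ax - b\|_p$ with $p \ge 2$: each iteration solves a weighted least-squares problem with weights $|r_i|^{p-2}$ built from the current residual $r = Ax - b$. The function name, tolerance, and the undamped update are illustrative choices; this plain variant is precisely the one that can diverge for larger $p$, not the provably convergent algorithm of the paper.

import numpy as np

def irls_lp_regression(A, b, p=4, iters=100, eps=1e-8):
    """Classical (undamped) IRLS sketch for min_x ||Ax - b||_p with p >= 2.

    Each step solves a weighted least-squares problem with weights
    w_i = |r_i|^(p-2), where r = Ax - b is the current residual.
    No damping or line search is used, so this variant may diverge
    for larger p, as the abstract notes.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]       # start from the l2 solution
    for _ in range(iters):
        r = A @ x - b
        w = np.maximum(np.abs(r), eps) ** (p - 2)  # reweighting; eps avoids zero weights
        Aw = A * w[:, None]                        # rows of A scaled by w_i
        # Solve the weighted normal equations (A^T W A) x = A^T W b
        x_new = np.linalg.solve(A.T @ Aw, Aw.T @ b)
        if np.linalg.norm(x_new - x) <= eps * max(1.0, np.linalg.norm(x)):
            x = x_new
            break
        x = x_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((200, 10))
    b = rng.standard_normal(200)
    x = irls_lp_regression(A, b, p=4)
    print("l4 residual norm:", np.linalg.norm(A @ x - b, ord=4))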

Citations

PROMPT: Parallel Iterative Algorithm for $\ell_{p}$ norm linear regression via Majorization Minimization with an application to semi-supervised graph learning

TLDR
The proposed algorithm is proved to be monotonic and to converge to the optimal solution of the problem for any value of p, and is shown to outperform state-of-the-art algorithms in speed of convergence.

Iteratively Reweighted Least Squares for Basis Pursuit with Global Linear Convergence Rate

TLDR
It is proved that a variant of IRLS converges with a global linear rate to a sparse solution, i.e., with a linear error decrease occurring immediately from any initialization, if the measurements fulfill the usual null space property assumption.

Iteratively Reweighted Least Squares for ℓ1-minimization with Global Linear Convergence Rate

TLDR
It is proved that IRLS for ℓ1-minimization converges to a sparse solution with a global linear rate, and the theory is supported by numerical experiments indicating that the linear rate essentially captures the correct dimension dependence.

Global Linear and Local Superlinear Convergence of IRLS for Non-Smooth Robust Regression

We advance both the theory and practice of robust $\ell_p$-quasinorm regression for $p \in (0, 1]$ by using novel variants of iteratively reweighted least-squares (IRLS) to solve the underlying…

Fast Regression for Structured Inputs

TLDR
This work gives an algorithm for $\ell_p$ regression on Vandermonde matrices that runs in time $O(n \log n + (dp)^{\omega} \cdot \mathrm{polylog}\, n)$, where $\omega$ is the exponent of matrix multiplication.

Improved iteration complexities for overconstrained p-norm regression

TLDR
Improved iteration complexities for solving $\ell_p$ regression are obtained, including an $O(d^{1/3}\epsilon^{-2/3})$ iteration complexity for approximate $\ell_\infty$ regression.

Highly smooth minimization of non-smooth problems

TLDR
The work goes beyond the previous $O(\epsilon^{-1})$ barrier in terms of $\epsilon$ dependence, and in the case of $\ell_\infty$ regression and $\ell_1$-SVM, overall improvements for some parameter settings in the moderate-accuracy regime are established.

Faster p-norm minimizing flows, via smoothed q-norm problems

TLDR
The key technical contribution is to show that the smoothed $\ell_p$-norm problems introduced by Adil et al. are inter-reducible for different values of $p$, giving the first high-accuracy algorithm for computing weighted $\ell_p$-norm minimizing flows that runs in time…

Complementary Composite Minimization, Small Gradients in General Norms, and Applications to Regression Problems

TLDR
This work introduces a new algorithmic framework for complementary composite minimization, where the objective function decouples into a (weakly) smooth and a uniformly convex term, and proves that the algorithms resulting from this framework are near-optimal in most of the standard optimization settings.

Algorithms for $\ell_p$-based semi-supervised learning on graphs

TLDR
Several efficient and scalable algorithms for solving variational and game-theoretic equations on weighted graphs for p > 2 are presented, along with numerical results on synthetic data and on classification and regression problems that illustrate the effectiveness of the $p$-Laplacian for semi-supervised learning with few labels.
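As a point of reference (a standard formulation, not a formula quoted from that paper), the variational $\ell_p$-based graph learning problem referred to here can be written as

$$ \min_{f : V \to \mathbb{R}} \;\; \sum_{(i,j) \in E} w_{ij}\, |f_i - f_j|^p \qquad \text{subject to } f_i = y_i \text{ for every labeled vertex } i, $$

where $w_{ij}$ are the edge weights and $y_i$ the given labels; as $p \to \infty$ this recovers the absolutely minimal Lipschitz extension discussed in the references below.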

References

Showing 1-10 of 56 references

Iterative Refinement for ℓp-norm Regression

TLDR
Improved algorithms are given for the $\ell_p$-regression problem $\min_x \|x\|_p$ subject to $Ax = b$, for $p \in (1, 2) \cup (2, \infty)$; they can be combined with nearly-linear-time solvers for linear systems in graph Laplacians to give minimum $\ell_p$-norm flow / voltage solutions to $1/\mathrm{poly}(n)$ accuracy on an undirected graph.
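For context, the flow connection mentioned above is the standard one (our notation, not quoted from the reference): with $B$ the edge-vertex incidence matrix of the graph and $d$ a demand vector, the minimum $\ell_p$-norm flow problem

$$ \min_{f \in \mathbb{R}^{E}} \; \|f\|_p \qquad \text{subject to} \qquad B^{\top} f = d $$

is exactly an instance of $\min_x \|x\|_p$ subject to $Ax = b$, so fast solvers for the latter transfer to flow problems on graphs.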

An homotopy method for lp regression provably beyond self-concordance and in input-sparsity time

TLDR
It is proved that any symmetric self-concordant barrier on the $\ell_p^n$ unit ball has self-concordance parameter $\Omega(n)$, and a randomized algorithm solving such problems in input-sparsity time is proposed, i.e., $\widetilde{O}_p(N + \mathrm{poly}(d))$, where $N$ is the size of the input and $d$ is the number of variables.

Flows in almost linear time via adaptive preconditioning

TLDR
This work gives an alternate approach for approximating undirected max-flow, and the first almost-linear time approximations of discretizations of total variation minimization objectives.

Iteratively reweighted least squares minimization for sparse recovery

TLDR
It is proved that when $\Phi$ satisfies the RIP conditions, the sequence $x^{(n)}$ converges for all $y$, regardless of whether $\Phi^{-1}(y)$ contains a sparse vector.

Iteratively Re-weighted Least Squares minimization: Proof of faster than linear rate for sparse recovery

TLDR
A specific recipe for updating weights is given that avoids technical shortcomings in other approaches, and for which it is shown that whenever the solution at a given iteration is sufficiently close to the limit, the remaining steps of the algorithm converge exponentially fast.

Faster p-norm minimizing flows, via smoothed q-norm problems

TLDR
The key technical contribution is to show that the smoothed $\ell_p$-norm problems introduced by Adil et al. are inter-reducible for different values of $p$, giving the first high-accuracy algorithm for computing weighted $\ell_p$-norm minimizing flows that runs in time…

Iterative Reweighted Least Squares

Describes a powerful optimization algorithm which iteratively solves a weighted least-squares approximation problem in order to solve an $L_p$ approximation problem.

Low-rank Matrix Recovery via Iteratively Reweighted Least Squares Minimization

TLDR
An efficient implementation is presented of an iteratively reweighted least squares algorithm for recovering a matrix from a small number of linear measurements, designed to simultaneously promote both a minimal nuclear norm and an approximately low-rank solution.

Asymptotic behavior of \(\ell_p\)-based Laplacian regularization in semi-supervised learning

TLDR
It is shown that the effect of the underlying density vanishes monotonically with $p$, such that in the limit $p = \infty$, corresponding to the so-called Absolutely Minimal Lipschitz Extension, the estimate $\hat{f}$ is independent of the distribution $P$.

Algorithms for Lipschitz Learning on Graphs

TLDR
This work develops fast algorithms for solving regression problems on graphs where one is given the value of a function at some vertices, and must find its smoothest possible extension to all vertices using the absolutely minimal Lipschitz extension.
...