# Solving Sparse Linear Systems Faster than Matrix Multiplication

```bibtex
@inproceedings{Peng2021SolvingSL,
  title     = {Solving Sparse Linear Systems Faster than Matrix Multiplication},
  author    = {Richard Peng and Santosh S. Vempala},
  booktitle = {SODA},
  year      = {2021}
}
```

Can linear systems be solved faster than matrix multiplication? While there has been remarkable progress for the special case of graph-structured linear systems, in the general setting the bit complexity of solving an $n \times n$ linear system $Ax=b$ is $\tilde{O}(n^\omega)$, where $\omega < 2.372864$ is the matrix multiplication exponent. Improving on this has been an open problem even for sparse linear systems with poly$(n)$ condition number.
In this paper, we present an algorithm that…
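For context, the classical iterative baseline in this regime can be sketched in code. The following is an illustrative pure-Python conjugate gradient solver, not the paper's algorithm: CG solves a sparse symmetric positive-definite system using only matrix-vector products, so each iteration costs $O(\mathrm{nnz}(A))$, and the iteration count grows with the condition number (all names below are illustrative).

```python
# Classical baseline, NOT the paper's algorithm: conjugate gradient (CG) for
# a sparse SPD system Ax = b. The matrix is a list of (row, col, value)
# triples, so one matvec costs O(nnz(A)).

def matvec(triples, x, n):
    """Multiply a sparse matrix (list of (i, j, v) triples) by vector x."""
    y = [0.0] * n
    for i, j, v in triples:
        y[i] += v * x[j]
    return y

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def conjugate_gradient(triples, b, n, tol=1e-10, max_iter=1000):
    x = [0.0] * n
    r = list(b)          # residual r = b - A x (x = 0 initially)
    p = list(r)          # search direction
    rs = dot(r, r)
    for _ in range(max_iter):
        if rs ** 0.5 < tol:
            break
        ap = matvec(triples, p, n)
        alpha = rs / dot(p, ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, ap)]
        rs_new = dot(r, r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# Well-conditioned test system: tridiagonal A = tridiag(-1, 2, -1), n = 5.
n = 5
A = [(i, i, 2.0) for i in range(n)]
A += [(i, i + 1, -1.0) for i in range(n - 1)]
A += [(i + 1, i, -1.0) for i in range(n - 1)]
b = [1.0] * n
x = conjugate_gradient(A, b, n)
residual = max(abs(bi - axi) for bi, axi in zip(b, matvec(A, x, n)))
print(residual < 1e-8)  # True: CG converged on this well-conditioned system
```

For SPD matrices with condition number $\kappa$, CG needs $O(\sqrt{\kappa}\log(1/\epsilon))$ iterations, which is exactly the dependence on sparsity and conditioning that the paper's block-Krylov approach trades off differently against $n^\omega$.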

## 16 Citations

Sparse Regression Faster than $d^\omega$

- Computer Science, ArXiv
- 2021

Algorithms for 2-norm regression, as well as p-norm regression, can be improved to run below the matrix multiplication threshold for sufficiently sparse, tall-and-thin row-sparse matrices.

Faster Sparse Matrix Inversion and Rank Computation in Finite Fields

- Mathematics, ITCS
- 2022

We improve the current best running time to invert sparse matrices over finite fields, lowering it to an expected $O(n^{2.2131})$ time for the current values of fast rectangular matrix…

Hardness Results for Laplacians of Simplicial Complexes via Sparse-Linear Equation Complete Gadgets

- Mathematics, ArXiv
- 2022

We study linear equations in combinatorial Laplacians of k-dimensional simplicial complexes (k-complexes), a natural generalization of graph Laplacians. Combinatorial Laplacians play a crucial role…

SGN: Sparse Gauss-Newton for Accelerated Sensitivity Analysis

- Computer Science, ACM Trans. Graph.
- 2022

This work shows how the dense Gauss-Newton Hessian can be transformed into an equivalent sparse matrix that can be assembled and factorized much more efficiently, leading to drastically reduced computation times for many inverse problems, which is demonstrated on a diverse set of examples.

Towards Neural Sparse Linear Solvers

- Computer Science, ArXiv
- 2022

This study proposes neural sparse linear solvers, a deep learning framework to learn approximate solvers for sparse symmetric linear systems, and shows a general approach to tackle problems involving sparse asymmetric matrices using graph neural networks.

A faster algorithm for solving general LPs

- Computer Science, STOC
- 2021

The running time of the LP solver is reduced to $O^*(n^\omega + n^{2.5-\alpha/2} + n^{2+1/18})$, where $\omega$ and $\alpha$ are the fast matrix multiplication exponent and its dual, under the common belief that $\omega \approx 2$ and $\alpha \approx 1$.

A nearly-linear time algorithm for linear programs with small treewidth: a multiscale representation of robust central path

- Computer Science, STOC
- 2021

This paper shows how to solve a linear program of the form $\min_{Ax=b} c^\top x$ in time $O(n \cdot \tau^2 \log(1/\epsilon))$, where $\tau$ is the treewidth; it obtains the first IPM with $o(\mathrm{rank}(A))$ time per iteration when the treewidth is small, and a novel representation of the solution under a multiscale basis similar to the wavelet basis.

Age-Aware Stochastic Hybrid Systems: Stability, Solutions, and Applications

- Computer Science, ArXiv
- 2021

This paper analyzes status update systems modeled through the Stochastic Hybrid Systems (SHSs) tool and provides a framework to establish the Lagrange stability and positive recurrence of these processes.

Efficient Use of Quantum Linear System Algorithms in Interior Point Methods for Linear Optimization

- Computer Science
- 2021

An Inexact Infeasible QIPM is developed to solve linear optimization problems, and it is discussed how to obtain an exact solution by Iterative Refinement without excessive use of QLSAs.

Faster $p$-Norm Regression Using Sparsity

- Computer Science
- 2021

It is shown that recent progress on fast sparse linear solvers can be leveraged to obtain faster-than-matrix-multiplication algorithms for any $p > 1$, i.e., in time $\tilde{O}(n^\theta)$ for some $\theta < \omega$, the matrix multiplication exponent.

## References

Showing 1–10 of 76 references

Nearly-Linear Time Algorithms for Preconditioning and Solving Symmetric, Diagonally Dominant Linear Systems

- Computer Science, Mathematics, SIAM J. Matrix Anal. Appl.
- 2014

A randomized algorithm is presented that, on input a symmetric, weakly diagonally dominant matrix $A$ with nonpositive off-diagonal entries and an $n$-vector $b$, produces an $\tilde{x}$ such that $\|\tilde{x} - A^{\dagger} b\|_A \leq \epsilon \|A^{\dagger} b\|_A$ in expected nearly-linear time.
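Graph Laplacians are the canonical members of the matrix class this solver handles. The following is a small illustrative check, not code from the cited work: it builds the Laplacian $L = D - A$ of a tiny graph and verifies the three defining properties (symmetry, weak diagonal dominance, nonpositive off-diagonals).

```python
# Illustrative only: construct the Laplacian of a small undirected graph and
# verify it is symmetric, weakly diagonally dominant, and has nonpositive
# off-diagonal entries -- the matrix class handled by SDD solvers.

def laplacian(n, edges):
    """Dense Laplacian of an undirected graph given as (u, v, weight) edges."""
    L = [[0.0] * n for _ in range(n)]
    for u, v, w in edges:
        L[u][u] += w
        L[v][v] += w
        L[u][v] -= w
        L[v][u] -= w
    return L

# 4-cycle with unit edge weights.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0)]
L = laplacian(4, edges)

symmetric = all(L[i][j] == L[j][i] for i in range(4) for j in range(4))
weakly_dd = all(L[i][i] >= sum(abs(L[i][j]) for j in range(4) if j != i)
                for i in range(4))
offdiag_nonpos = all(L[i][j] <= 0 for i in range(4) for j in range(4) if i != j)
print(symmetric and weakly_dd and offdiag_nonpos)  # True
```

Note that a Laplacian is singular (the all-ones vector is in its kernel), which is why the guarantee above is stated with the pseudoinverse $A^{\dagger}$ rather than $A^{-1}$.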

Superfast and Stable Structured Solvers for Toeplitz Least Squares via Randomized Sampling

- Mathematics, Computer Science, SIAM J. Matrix Anal. Appl.
- 2014

This work generalizes standard hierarchically semiseparable (HSS) matrix representations to rectangular ones, and constructs a rectangular HSS approximation to $\mathcal{C}$ in nearly linear complexity with randomized sampling and fast multiplications of $\mathcal{C}$ with vectors.

Faster inversion and other black box matrix computations using efficient block projections

- Mathematics, Computer Science, ISSAC '07
- 2007

The correctness of the algorithm for finding rational solutions to sparse systems of linear equations is established by proving the existence of efficient block projections for arbitrary nonsingular matrices over sufficiently large fields; these projections are then incorporated into existing black-box matrix algorithms to derive improved bounds for the cost of several matrix problems.

Relative-Error CUR Matrix Decompositions

- Computer Science, Mathematics, SIAM J. Matrix Anal. Appl.
- 2008

These two algorithms are the first polynomial time algorithms for such low-rank matrix approximations that come with relative-error guarantees; previously, in some cases, it was not even known whether such matrix decompositions exist.

Speeding-up linear programming using fast matrix multiplication

- Computer Science, 30th Annual Symposium on Foundations of Computer Science
- 1989

An algorithm for solving linear programming problems that requires $O((m+n)^{1.5} n L)$ arithmetic operations in the worst case is presented, improving on the best known time complexity for linear programming by a factor of about $\sqrt{n}$.

Solving Directed Laplacian Systems in Nearly-Linear Time through Sparse LU Factorizations

- Computer Science, Mathematics, 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS)
- 2018

It is shown that Eulerian Laplacians (and therefore the Laplacians of all strongly connected directed graphs) have sparse approximate LU-factorizations, and it is proved that, once constructed, they yield nearly-linear time algorithms for solving directed Laplacian systems.

A survey of direct methods for sparse linear systems

- Computer Science, Acta Numerica
- 2016

The goal of this survey article is to impart a working knowledge of the underlying theory and practice of sparse direct methods for solving linear systems and least-squares problems, and to provide an overview of the algorithms, data structures, and software available to solve these problems.

Sparse random matrices have simple spectrum

- Mathematics, Computer Science
- 2018

The proof is slightly modified to show that the adjacency matrix of a sparse Erdős–Rényi graph has simple spectrum for $n^{-1+\delta} \leq p \leq 1 - n^{-1+\delta}$.

Stability of the Lanczos Method for Matrix Function Approximation

- Computer Science, Mathematics, SODA
- 2018

This paper proves that finite-precision Lanczos essentially matches the exact arithmetic guarantee if computations use roughly $\log(nC\|A\|)$ bits of precision, and raises the question of whether convergence in fewer than $\mathrm{poly}(\kappa)$ iterations can be expected in finite precision, even for matrices with clustered, skewed, or otherwise favorable eigenvalue distributions.

On Matrices With Displacement Structure: Generalized Operators and Faster Algorithms

- Mathematics, Computer Science, SIAM J. Matrix Anal. Appl.
- 2017

This paper generalizes classical displacement operators, based on block diagonal matrices with companion diagonal blocks, designs fast algorithms to perform the task above for this extended class of structured matrices, and obtains faster Las Vegas algorithms for structured inversion and linear system solving.