Solving Sparse Linear Systems Faster than Matrix Multiplication

@inproceedings{Peng2021SolvingSL,
  title={Solving Sparse Linear Systems Faster than Matrix Multiplication},
  author={Richard Peng and Santosh S. Vempala},
  booktitle={SODA},
  year={2021}
}
Can linear systems be solved faster than matrix multiplication? While there has been remarkable progress for the special case of graph-structured linear systems, in the general setting the bit complexity of solving an $n \times n$ linear system $Ax=b$ is $\tilde{O}(n^\omega)$, where $\omega < 2.372864$ is the matrix multiplication exponent. Improving on this has been an open problem even for sparse linear systems with poly$(n)$ condition number. In this paper, we present an algorithm that…
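For contrast, the classical iterative baseline for sparse systems is conjugate gradient, whose per-iteration cost is a single $O(\mathrm{nnz}(A))$ matrix-vector product but whose iteration count degrades with the condition number. The sketch below is a textbook method with illustrative parameters, not the paper's block Krylov algorithm:

import numpy as np
import scipy.sparse as sp

def conjugate_gradient(A, b, tol=1e-8, max_iter=None):
    # Textbook CG for a sparse symmetric positive definite A (illustrative
    # only; not the Peng-Vempala algorithm). One sparse matvec per iteration.
    n = b.shape[0]
    max_iter = max_iter if max_iter is not None else 10 * n
    x = np.zeros(n)
    r = b - A @ x                  # residual
    p = r.copy()                   # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p                 # O(nnz(A)) work
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Hypothetical usage on a random sparse SPD system:
n = 500
M = sp.random(n, n, density=0.01, format="csr", random_state=0)
A = M @ M.T + sp.identity(n)       # SPD by construction
b = np.random.default_rng(0).standard_normal(n)
print(np.linalg.norm(A @ conjugate_gradient(A, b) - b))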

Citations

A nearly-linear time algorithm for linear programs with small treewidth: a multiscale representation of robust central path
TLDR
This paper shows how to solve a linear program of the form $\min_{Ax=b,\;\ell\le x\le u} c^{\top}x$ in time $O(n \cdot \tau^{2} \log(1/\varepsilon))$, where $\tau$ is the treewidth, and obtains the first IPM with $o(\mathrm{rank}(A))$ time per iteration when the treewidth is small, using a novel representation of the solution under a multiscale basis similar to the wavelet basis.
Sparse Regression Faster than $d^{\omega}$
TLDR
Algorithms for 2-norm regression, as well as $p$-norm regression, can be improved to go below the matrix multiplication threshold for sufficiently sparse, tall-and-thin, row-sparse matrices.
Towards Neural Sparse Linear Solvers
TLDR
This study proposes neural sparse linear solvers, a deep learning framework to learn approximate solvers for sparse symmetric linear systems, and shows a general approach to tackle problems involving sparse asymmetric matrices using graph neural networks.
Hardness Results for Laplacians of Simplicial Complexes via Sparse-Linear Equation Complete Gadgets
We study linear equations in combinatorial Laplacians of $k$-dimensional simplicial complexes ($k$-complexes), a natural generalization of graph Laplacians. Combinatorial Laplacians play a crucial role…
Matrix anti-concentration inequalities with applications
  • Zipei Nie
  • Computer Science, Mathematics
    STOC
  • 2022
TLDR
Two matrix anti-concentration inequalities are established, which lower bound the minimum singular values of the sum of independent positive semidefinite self-adjoint matrices and the linear combination of independent random matrices with independent Gaussian coefficients.
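Schematically, such an inequality upper-bounds the probability that a random combination of fixed matrices is nearly singular. The shape below is a hedged illustration only; the exponents $c, C > 0$ are placeholders, not the bounds proved in the paper:

$\Pr\big[\sigma_{\min}\big(\sum_{i=1}^{m} \xi_i A_i\big) \le \varepsilon\big] \le C\, m^{C}\, \varepsilon^{c}$, where the $A_i$ are fixed matrices and $\xi_1, \dots, \xi_m$ are i.i.d. Gaussian coefficients.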
Improved iteration complexities for overconstrained p-norm regression
TLDR
Improved iteration complexities for solving $\ell_p$ regression are obtained, including an $O(d^{1/3}\epsilon^{-2/3})$ iteration complexity for approximate $\ell_\infty$ regression.
SGN: Sparse Gauss-Newton for Accelerated Sensitivity Analysis
TLDR
This work shows how the dense Gauss-Newton Hessian can be transformed into an equivalent sparse matrix that can be assembled and factorized much more efficiently, leading to drastically reduced computation times for many inverse problems, which is demonstrated on a diverse set of examples.
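For context, a Gauss-Newton step solves the normal equations $(J^{\top} J)\,\delta = -J^{\top} r$; when that system matrix is sparse, a sparse factorization is far cheaper than a dense solve. A minimal sketch of the generic step (the paper's specific dense-to-sparse transformation is not reproduced here; all names below are illustrative):

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def gauss_newton_step(J, r, damping=1e-8):
    # One damped Gauss-Newton step: solve (J^T J + damping*I) delta = -J^T r
    # with a sparse LU factorization. J: sparse m x n Jacobian, r: residual.
    H = (J.T @ J).tocsc() + damping * sp.identity(J.shape[1], format="csc")
    return spla.splu(H).solve(-(J.T @ r))

# Toy usage with a random sparse Jacobian (hypothetical data):
m, n = 400, 200
J = sp.random(m, n, density=0.02, format="csr", random_state=0)
print(gauss_newton_step(J, np.ones(m))[:3])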
Faster Sparse Matrix Inversion and Rank Computation in Finite Fields
We improve the best known running time for inverting sparse matrices over finite fields, lowering it to expected $O(n^{2.2131})$ time for the current values of the fast rectangular matrix multiplication exponents.
Mean Hitting Time on Recursive Growth Tree Network
TLDR
A series of combinatorial techniques, called Mapping Transformation, is developed to exactly determine the associated $\langle H\rangle$-polynomial for random walks on recursive growth tree networks that are built from an arbitrary seed tree via various primitive graphic operations.
Faster $p$-Norm Regression Using Sparsity
TLDR
It is shown that recent progress on fast sparse linear solvers can be leveraged to obtain faster-than-matrix-multiplication algorithms for any $p > 1$, i.e., running in time $\tilde{O}(n^{\theta})$ for some $\theta < \omega$, the matrix multiplication constant.
...

References

SHOWING 1-10 OF 76 REFERENCES
Nearly-Linear Time Algorithms for Preconditioning and Solving Symmetric, Diagonally Dominant Linear Systems
TLDR
A randomized algorithm is presented that, on input a symmetric, weakly diagonally dominant matrix $A$ with nonpositive off-diagonal entries and an $n$-vector $b$, produces an $\tilde{x}$ such that $\|\tilde{x} - A^{\dagger} b\|_{A} \leq \epsilon \|A^{\dagger} b\|_{A}$ in expected nearly-linear time.
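As a reference point only (an off-the-shelf preconditioned iteration, not the nearly-linear-time solver of this paper), an SDD system such as a regularized graph Laplacian can be attacked with conjugate gradient plus a simple diagonal preconditioner; the construction below is illustrative:

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Path-graph Laplacian on n vertices (an SDD matrix), regularized so it is
# nonsingular for this toy example.
n = 500
main = 2 * np.ones(n); main[0] = main[-1] = 1
L = sp.diags([-np.ones(n - 1), main, -np.ones(n - 1)], [-1, 0, 1], format="csr")
A = L + 1e-3 * sp.identity(n)
b = np.random.default_rng(1).standard_normal(n)

# Jacobi (diagonal) preconditioner; fast SDD solvers use much stronger
# graph-theoretic preconditioners to reach nearly-linear time.
d = A.diagonal()
M = spla.LinearOperator((n, n), matvec=lambda v: v / d)
x, info = spla.cg(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))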
Superfast and Stable Structured Solvers for Toeplitz Least Squares via Randomized Sampling
TLDR
This work generalizes standard hierarchically semiseparable (HSS) matrix representations to rectangular ones, and constructs a rectangular HSS approximation to $\mathcal{C}$ in nearly linear complexity with randomized sampling and fast multiplications of $\mathcal{C}$ with vectors.
Faster inversion and other black box matrix computations using efficient block projections
TLDR
The correctness of an algorithm for finding rational solutions to sparse systems of linear equations is established by proving the existence of efficient block projections for arbitrary non-singular matrices over sufficiently large fields; these projections are then incorporated into existing black-box matrix algorithms to derive improved bounds for the cost of several matrix problems.
Relative-Error CUR Matrix Decompositions
TLDR
These two algorithms are the first polynomial time algorithms for such low-rank matrix approximations that come with relative-error guarantees; previously, in some cases, it was not even known whether such matrix decompositions exist.
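To make the object concrete, here is a naive CUR sketch with uniform column/row sampling and $U = C^{+} A R^{+}$; the cited paper instead samples by carefully chosen data-dependent probabilities to obtain its relative-error guarantees, so the code below is an assumption-laden toy, not the paper's algorithm:

import numpy as np

def naive_cur(A, c, r, rng):
    # Uniformly sample c columns and r rows, then set U = pinv(C) A pinv(R).
    # Relative-error CUR requires non-uniform, data-dependent sampling.
    cols = rng.choice(A.shape[1], size=c, replace=False)
    rows = rng.choice(A.shape[0], size=r, replace=False)
    C, R = A[:, cols], A[rows, :]
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 8)) @ rng.standard_normal((8, 80))  # rank 8
C, U, R = naive_cur(A, 20, 20, rng)
print(np.linalg.norm(A - C @ U @ R) / np.linalg.norm(A))  # near zero here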
Speeding-up linear programming using fast matrix multiplication
  • P. M. Vaidya
  • Computer Science
    30th Annual Symposium on Foundations of Computer Science
  • 1989
TLDR
An algorithm for solving linear programming problems that requires $O((m+n)^{1.5} n L)$ arithmetic operations in the worst case is presented, which improves on the best known time complexity for linear programming by a factor of about $\sqrt{n}$.
Solving Directed Laplacian Systems in Nearly-Linear Time through Sparse LU Factorizations
TLDR
It is shown that Eulerian Laplacians (and therefore the Laplacians of all strongly connected directed graphs) have sparse approximate LU-factorizations, and it is proved that once constructed they yield nearly-linear time algorithms for solving directed Laplacian systems.
A survey of direct methods for sparse linear systems
TLDR
The goal of this survey article is to impart a working knowledge of the underlying theory and practice of sparse direct methods for solving linear systems and least-squares problems, and to provide an overview of the algorithms, data structures, and software available to solve these problems.
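A minimal example of the direct approach the survey treats, via SciPy's sparse LU with a fill-reducing ordering (illustrative usage, not code from the survey):

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Sparse, diagonally dominant (hence nonsingular) system; factor once,
# then reuse the factorization for any number of right-hand sides.
n = 2000
A = (sp.random(n, n, density=0.001, format="csc", random_state=0)
     + 10 * sp.identity(n, format="csc"))
b = np.ones(n)

lu = spla.splu(A)        # symbolic analysis + numeric factorization
x = lu.solve(b)          # two triangular solves
print(np.linalg.norm(A @ x - b))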
Sparse random matrices have simple spectrum
  • K. Luh, V. Vu
  • Mathematics, Computer Science
    Annales de l'Institut Henri Poincaré, Probabilités et Statistiques
  • 2020
TLDR
The proof is slightly modified to show that the adjacency matrix of a sparse Erdős-Rényi graph has simple spectrum for $n^{-1+\delta} \leq p \leq 1 - n^{-1+\delta}$.
Stability of the Lanczos Method for Matrix Function Approximation
TLDR
This paper proves that finite precision Lanczos essentially matches the exact arithmetic guarantee if computations use roughly $\log(nC\|A\|)$ bits of precision, and raises the question of whether convergence in fewer than $\mathrm{poly}(\kappa)$ iterations can be expected in finite precision, even for matrices with clustered, skewed, or otherwise favorable eigenvalue distributions.
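For reference, the exact-arithmetic recurrence whose floating-point behavior the paper studies is the textbook Lanczos iteration; the sketch below runs in ordinary double precision with no reorthogonalization (illustrative, not the paper's analysis):

import numpy as np
from scipy.linalg import eigvalsh_tridiagonal

def lanczos(A, v, k):
    # k steps of the three-term Lanczos recurrence for symmetric A.
    # In floating point the basis Q gradually loses orthogonality.
    n = v.shape[0]
    Q = np.zeros((n, k))
    alphas, betas = np.zeros(k), np.zeros(k - 1)
    q, q_prev, beta = v / np.linalg.norm(v), np.zeros(n), 0.0
    for j in range(k):
        Q[:, j] = q
        w = A @ q - beta * q_prev
        alphas[j] = q @ w
        w -= alphas[j] * q
        if j < k - 1:
            beta = np.linalg.norm(w)
            betas[j] = beta
            q_prev, q = q, w / beta
    return alphas, betas, Q

# Ritz values from 20 steps vs. true extreme eigenvalues (toy symmetric A):
rng = np.random.default_rng(0)
B = rng.standard_normal((200, 200)); A = (B + B.T) / 2
alphas, betas, _ = lanczos(A, rng.standard_normal(200), 20)
print(eigvalsh_tridiagonal(alphas, betas)[-3:], np.linalg.eigvalsh(A)[-3:])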
On Matrices With Displacement Structure: Generalized Operators and Faster Algorithms
TLDR
This paper generalizes classical displacement operators, based on block diagonal matrices with companion diagonal blocks, and designs fast algorithms to perform the task above for this extended class of structured matrices, and obtains faster Las Vegas algorithms for structured inversion and linear system solving.
...