# Fast linear algebra is stable

```bibtex
@article{Demmel2007FastLA,
  title   = {Fast linear algebra is stable},
  author  = {James Demmel and Ioana Dumitriu and Olga Holtz},
  journal = {Numerische Mathematik},
  year    = {2007},
  volume  = {108},
  pages   = {59--91}
}
```

In Demmel et al. (Numer. Math. 106(2), 199–224, 2007) we showed that a large class of fast recursive matrix multiplication algorithms is stable in a normwise sense, and that in fact if multiplication of n-by-n matrices can be done by any algorithm in O(n^(ω+η)) operations for any η > 0, then it can be done stably in O(n^(ω+η)) operations for any η > 0. Here we extend this result to show that essentially all standard linear algebra operations, including LU decomposition, QR decomposition, linear…
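The "fast recursive matrix multiplication algorithms" covered by the stability result include Strassen-style schemes. As a minimal illustration (not the paper's own code), here is a sketch of Strassen's recursion for square matrices whose dimension is a power of two, falling back to conventional multiplication below a cutoff:

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Strassen's recursive matrix multiplication (illustrative sketch).

    Assumes A and B are square with power-of-two dimension; below
    `cutoff` it falls back to the conventional algorithm.
    """
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Strassen's seven recursive products (instead of eight)
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4,           M1 - M2 + M3 + M6]])
```

Seven recursive multiplications per halving step give the O(n^(log₂ 7)) ≈ O(n^2.81) operation count; the paper's point is that such algorithms can also be made normwise backward stable.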

## 208 Citations

### Fast matrix multiplication is stable

- Computer Science · Numerische Mathematik
- 2007

It is shown that the exponent of matrix multiplication (the optimal running time) can be achieved by numerically stable algorithms, and new group-theoretic algorithms proposed in Cohn and Umans, and Cohn et al. are all included in the class of algorithms to which the analysis applies.

### Minimizing Communication in Numerical Linear Algebra

- Computer Science · SIAM J. Matrix Anal. Appl.
- 2011

This work generalizes a lower bound on the amount of communication needed to perform dense, n-by-n matrix multiplication using the conventional O(n^3) algorithm to a much wider variety of algorithms, including LU, Cholesky, LDL^T, and QR factorization, the Gram–Schmidt algorithm, and algorithms for eigenvalues and singular values.
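Roughly stated (a paraphrase, not a verbatim quote from the paper): for a sequential machine with a fast memory of size M words, an algorithm in the covered class that performs G flops must move

$$\#\text{words moved} \;=\; \Omega\!\left(\frac{G}{\sqrt{M}}\right),$$

which for conventional O(n^3) matrix multiplication recovers the classical Ω(n^3/√M) bound.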

### Work-efficient matrix inversion in polylogarithmic time

- Computer Science · SPAA
- 2013

Preliminary experiments on multicore machines give the surprising result that even on such moderately parallel machines the algorithm outperforms Intel's Math Kernel Library and that Strassen's algorithm seems to be numerically more stable than one might expect.

### Optimal algorithms for linear algebra by quantum inspiration

- Computer Science · ArXiv
- 2013

Derived from quantum intuition, the proposed algorithm is completely disjoint from all previous classical algorithms and builds on a combination of low-discrepancy sequences and perturbation analysis; the authors hope it motivates further exploration of quantum techniques, leading to a better understanding of the space complexity and numerical stability of these problems.

### A Quasi-Random Approach to Matrix Spectral Analysis

- Computer Science, Mathematics
- 2018

This work develops a completely new, efficient, stable, parallel algorithm to compute an approximate spectral decomposition of any Hermitian matrix, relying on the theory of low-discrepancy (quasi-random) sequences, a theory which had not previously been connected to linear algebra problems.

### Minimizing Communication in Linear Algebra

- Computer Science, Mathematics · ArXiv
- 2009

This work shows how to extend known communication lower bounds for conventional O(n^3) dense matrix multiplication to all direct linear algebra, i.e. for solving linear systems, least squares problems, eigenproblems and the SVD, for dense or sparse matrices, and for sequential or parallel machines.

### Near-Optimal Algorithms for Linear Algebra in the Current Matrix Multiplication Time

- Computer Science, Mathematics · SODA
- 2022

This work shows how to bypass the main open question regarding the logarithmic factors in the sketching dimension of existing oblivious subspace embeddings that achieve constant-factor approximation, using a refined sketching technique, and obtain optimal or nearly optimal bounds for these problems.

### Randomized numerical linear algebra: Foundations and algorithms

- Computer Science · Acta Numerica
- 2020

This survey describes probabilistic algorithms for linear algebraic computations, such as factorizing matrices and solving linear systems, that have a proven track record for real-world problems and treats both the theoretical foundations of the subject and practical computational issues.

### The Quasi-Random Perspective on Matrix Spectral Analysis with Applications

- Computer Science, Mathematics · ArXiv
- 2015

This work analyzes the discrepancy of an n-dimensional sequence formed by taking the fractional part of integer multiples of the vector of eigenvalues of the input matrix, and gives rise to a conceptually new algorithm to compute an approximate spectral decomposition of any n x n Hermitian matrix.
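The sequence described above — fractional parts of integer multiples of the eigenvalue vector — can be sketched in a few lines (function and parameter names are ours, not the authors'):

```python
import numpy as np

def frac_multiples(eigvals, num_terms):
    """Quasi-random sequence of fractional parts frac(k * eigvals),
    k = 1..num_terms. Each row is a point in the unit hypercube whose
    dimension equals the number of eigenvalues."""
    k = np.arange(1, num_terms + 1)[:, None]
    return np.mod(k * eigvals[None, :], 1.0)

# Example: eigenvalues of a small Hermitian matrix
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam = np.linalg.eigvalsh(A)
seq = frac_multiples(lam, 5)  # 5 points in the unit square
```

The paper's discrepancy analysis concerns how uniformly such points fill the unit cube, which depends on Diophantine properties of the eigenvalues.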

### Fast and stable randomized low-rank matrix approximation

- Computer Science · ArXiv
- 2020

This work studies a generalization of the Nyström method applicable to general matrices, and shows that it has near-optimal approximation quality comparable to competing methods and can significantly outperform state-of-the-art methods.
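A generalized Nyström approximation for a general (non-symmetric) matrix can be sketched as A ≈ (AX)(YᵀAX)⁺(YᵀA) with Gaussian sketches X and Y. This is a minimal illustration under our own naming and parameter choices, not the paper's exact method:

```python
import numpy as np

def generalized_nystrom(A, r, oversample=5, seed=None):
    """Rank-r generalized Nystrom approximation (illustrative sketch).

    X is an n-by-r sketch, Y an m-by-(r+oversample) sketch; the
    oversampling on Y is a common stabilization choice.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    X = rng.standard_normal((n, r))
    Y = rng.standard_normal((m, r + oversample))
    AX = A @ X                      # m-by-r range sketch
    YA = Y.T @ A                    # (r+p)-by-n co-range sketch
    core = np.linalg.pinv(Y.T @ AX) # small (r)-by-(r+p) pseudoinverse
    return AX @ core @ YA
```

For a matrix of exact rank r, generic sketches recover A exactly (up to roundoff); for general matrices the error is near-optimal in the sense the paper analyzes.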

## References

Showing 1–10 of 70 references.

### Fast matrix multiplication is stable

- Computer Science · Numerische Mathematik
- 2007

It is shown that the exponent of matrix multiplication (the optimal running time) can be achieved by numerically stable algorithms, and new group-theoretic algorithms proposed in Cohn and Umans, and Cohn et al. are all included in the class of algorithms to which the analysis applies.

### Parallel Algorithm for Solving Some Spectral Problems of Linear Algebra

- Mathematics, Computer Science
- 1993

### An inverse free parallel spectral divide and conquer algorithm for nonsymmetric eigenproblems

- Computer Science
- 1997

An inverse-free, highly parallel, spectral divide and conquer algorithm that can compute either an invariant subspace of a nonsymmetric matrix, or a pair of left and right deflating subspaces of a regular matrix pencil.

### A group-theoretic approach to fast matrix multiplication

- Mathematics · 44th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2003)
- 2003

A new, group-theoretic approach to bounding the exponent of matrix multiplication is developed, including a proof that certain families of groups of order n^(2+o(1)) support n × n matrix multiplication.

### Stability of block algorithms with fast level-3 BLAS

- Computer Science · TOMS
- 1992

The numerical stability of the block algorithms in the new linear algebra program library LAPACK is investigated and it is shown that these algorithms have backward error analyses in which the backward error bounds are commensurate with the error bounds for the underlying level-3 BLAS (BLAS3).

### Gaussian elimination is not optimal

- Mathematics
- 1969

Below we will give an algorithm which computes the coefficients of the product of two square matrices A and B of order n from the coefficients of A and B with less than 4.7 · n^(log₂ 7) arithmetical…

### Stability of Parallel Triangular System Solvers

- Mathematics, Computer Science · SIAM J. Sci. Comput.
- 1995

A forward error bound is identified that holds not only for all the methods described here, but for any triangular equation solver that does not rely on algebraic cancellation; among the implications of the bound is that any such method is extremely accurate for certain special types of triangular systems.

### Stability of block LU factorization

- Computer Science · Numer. Linear Algebra Appl.
- 1995

It is shown here that block LU factorization is stable if A is block diagonally dominant by columns, and that the level of instability in block LU factorization can be bounded in terms of the condition number κ(A) and the growth factor for Gaussian elimination without pivoting.
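One step of the block LU factorization analyzed here can be sketched as follows (an illustration, not the paper's code; without pivoting, it assumes the leading block is well conditioned, e.g. A block diagonally dominant by columns):

```python
import numpy as np

def block_lu_step(A, h):
    """One step of block LU without pivoting, partitioning A at index h:

        A = [I    0] [A11  A12]
            [L21  I] [0    S  ],   S = A22 - A21 A11^{-1} A12.

    Returns block lower- and upper-triangular factors L, U with A = L U.
    """
    n = A.shape[0]
    A11, A12 = A[:h, :h], A[:h, h:]
    A21, A22 = A[h:, :h], A[h:, h:]
    # Solve L21 @ A11 = A21 (here via explicit inverse for brevity;
    # a triangular solve would be used in practice)
    L21 = A21 @ np.linalg.inv(A11)
    S = A22 - L21 @ A12  # Schur complement
    L = np.block([[np.eye(h),            np.zeros((h, n - h))],
                  [L21,                  np.eye(n - h)]])
    U = np.block([[A11,                  A12],
                  [np.zeros((n - h, h)), S]])
    return L, U
```

Recursing on the Schur complement S yields the full factorization; the cited stability bound controls the growth of S in terms of κ(A).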

### Using the Matrix Sign Function to Compute Invariant Subspaces

- Computer Science
- 1998

A new perturbation theory for the matrix sign function, the conditioning of its computation, the numerical stability of the divide-and-conquer algorithm, and iterative refinement schemes are presented.