Near-Optimal Algorithms for Linear Algebra in the Current Matrix Multiplication Time

@inproceedings{Chepurko2022NearOptimalAF,
  title={Near-Optimal Algorithms for Linear Algebra in the Current Matrix Multiplication Time},
  author={Nadiia Chepurko and Kenneth L. Clarkson and Praneeth Kacham and David P. Woodruff},
  booktitle={SODA},
  year={2022}
}

In the numerical linear algebra community, it was suggested that to obtain nearly optimal bounds for various problems such as rank computation, finding a maximal linearly independent subset of columns (a basis), regression, or low-rank approximation, a natural way would be to resolve the main open question of Nelson and Nguyen (FOCS, 2013). This question concerns the logarithmic factors in the sketching dimension of existing oblivious subspace embeddings that achieve constant-factor…
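To make the sketching setting concrete, here is a minimal numpy sketch-and-solve example (not the paper's algorithm): an oblivious sketching matrix S compresses a tall least-squares instance, and the small sketched problem is solved in its place. The Gaussian map and all sizes below are illustrative assumptions; the paper's point is that much sparser embeddings with near-optimal sketching dimension suffice.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 10_000, 20                      # illustrative tall-and-skinny regression instance
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)

# Oblivious subspace embedding stand-in: a dense Gaussian sketch with m >> d rows.
m = 40 * d
S = rng.standard_normal((m, n)) / np.sqrt(m)

# Sketch-and-solve: solve the m x d problem instead of the n x d one.
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)

cost = lambda x: np.linalg.norm(A @ x - b)
print("exact cost:   ", cost(x_exact))
print("sketched cost:", cost(x_sketch))   # close to the exact cost with high probability
```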
Approximate Euclidean lengths and distances beyond Johnson-Lindenstrauss
TLDR
An algorithm is given to estimate the Euclidean lengths of the rows of a matrix, with element-wise probabilistic bounds that are at least as good as standard JL approximations in the worst case, but asymptotically better for matrices with decaying spectrum.
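For context, the Johnson-Lindenstrauss baseline that this work improves on fits in a few lines: multiply by a random Gaussian matrix with a small number of columns and read off the lengths of the projected rows. This sketch only shows that baseline, with arbitrary illustrative sizes, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, k = 500, 1_000, 25               # k = number of random projection columns (illustrative)
A = rng.standard_normal((n, d))

# JL baseline: lengths of the Gaussian-projected rows estimate the true row lengths.
G = rng.standard_normal((d, k)) / np.sqrt(k)
est = np.linalg.norm(A @ G, axis=1)
true = np.linalg.norm(A, axis=1)

print("max relative error:", np.max(np.abs(est - true) / true))
```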
pylspack: Parallel algorithms and data structures for sketching, column subset selection, regression and leverage scores
TLDR
This work provides a detailed analysis of the ubiquitous CountSketch transform and its combination with Gaussian random projections, accounting for memory requirements, computational complexity and workload balancing.
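As a rough illustration of the primitives discussed (not pylspack's actual parallel implementation or API), the following numpy sketch applies a CountSketch transform and then a Gaussian projection on top of it; matrix sizes and sketch dimensions are arbitrary assumptions.

```python
import numpy as np

def countsketch(A, m, rng):
    """Apply an m x n CountSketch to the n x d matrix A: each row of A is
    hashed to one of m buckets and added with a random sign."""
    n = A.shape[0]
    buckets = rng.integers(0, m, size=n)
    signs = rng.choice([-1.0, 1.0], size=n)
    SA = np.zeros((m, A.shape[1]))
    np.add.at(SA, buckets, signs[:, None] * A)
    return SA

rng = np.random.default_rng(2)
A = rng.standard_normal((20_000, 30))

SA = countsketch(A, m=2_000, rng=rng)                 # sparse, input-sparsity-time stage
G = rng.standard_normal((200, 2_000)) / np.sqrt(200)
GSA = G @ SA                                          # dense Gaussian stage for a smaller sketch

# Both stages approximately preserve the Gram matrix (column-space geometry) of A.
ref = np.linalg.norm(A.T @ A, 2)
print(np.linalg.norm(A.T @ A - SA.T @ SA, 2) / ref)
print(np.linalg.norm(A.T @ A - GSA.T @ GSA, 2) / ref)
```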
Dynamic Least-Squares Regression
TLDR
This work revisits the canonical problem of dynamic least-squares regression (LSR), where the goal is to learn a linear model over incremental training data, and presents a dynamic data structure that maintains an approximate solution to dynamic LSR, accurate to within an arbitrarily small constant factor, with amortized update time O(d), almost matching the running time of the static (sketching-based) solution.
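For intuition only, here is a naive incremental version of the static sketching baseline mentioned above: a CountSketch of (A, b) is updated as rows arrive and the small sketched system is re-solved on demand. This toy is a stand-in under illustrative assumptions, not the paper's data structure, and its query step is far more expensive than the O(d) amortized updates the paper achieves.

```python
import numpy as np

class SketchedLSR:
    """Maintain a CountSketch of (A, b) under row insertions; solve least
    squares on the sketch when queried. Illustrative toy only."""
    def __init__(self, d, m, seed=0):
        self.rng = np.random.default_rng(seed)
        self.SA = np.zeros((m, d))
        self.Sb = np.zeros(m)
        self.m = m

    def insert(self, a, beta):
        h = self.rng.integers(self.m)        # bucket for the new row
        s = self.rng.choice([-1.0, 1.0])     # random sign
        self.SA[h] += s * a
        self.Sb[h] += s * beta

    def query(self):
        x, *_ = np.linalg.lstsq(self.SA, self.Sb, rcond=None)
        return x

rng = np.random.default_rng(3)
d = 10
x_true = rng.standard_normal(d)
lsr = SketchedLSR(d, m=400)
for _ in range(5_000):
    a = rng.standard_normal(d)
    lsr.insert(a, a @ x_true + 0.01 * rng.standard_normal())
print("distance to ground truth:", np.linalg.norm(lsr.query() - x_true))
```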

References

Showing 1-10 of 38 references
Optimal Approximate Matrix Product in Terms of Stable Rank
We prove, using the subspace embedding guarantee in a black box way, that one can achieve the spectral norm guarantee for approximate matrix multiplication with a dimensionality-reducing map whose number of rows depends on the stable rank of the matrices being multiplied, rather than on their rank or dimensions.
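In code, approximate matrix multiplication from sketches looks as follows; the Gaussian map S is only a stand-in for a dimensionality-reducing map with the required guarantee, and all sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, d1, d2, m = 5_000, 40, 60, 500
A = rng.standard_normal((n, d1))
B = rng.standard_normal((n, d2))

S = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in dimensionality-reducing map

approx = (S @ A).T @ (S @ B)                   # approximate A^T B computed from the sketches
exact = A.T @ B

# Spectral-norm error, the quantity bounded by eps * ||A||_2 * ||B||_2 style guarantees.
print(np.linalg.norm(exact - approx, 2) / (np.linalg.norm(A, 2) * np.linalg.norm(B, 2)))
```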
Nearly Tight Oblivious Subspace Embeddings by Trace Inequalities
TLDR
A new analysis of sparse oblivious subspace embeddings, based on the "matrix Chernoff" technique, is presented; the bounds obtained are much tighter than previous ones, matching known lower bounds up to a single log(d) factor in the embedding dimension.
Improved Approximation Algorithms for Large Matrices via Random Projections
  • Tamás Sarlós
  • Computer Science
    2006 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS'06)
  • 2006
TLDR
The key idea is that low-dimensional embeddings can be used to eliminate data dependence and provide more versatile, linear-time, pass-efficient matrix computation.
Sketching as a Tool for Numerical Linear Algebra
TLDR
This survey highlights the recent advances in algorithms for numerical linear algebra that have come from the technique of linear sketching, and considers least squares as well as robust regression problems, low-rank approximation, and graph sparsification.
Toward a Unified Theory of Sparse Dimensionality Reduction in Euclidean Space
TLDR
This work qualitatively unifies several results related to the Johnson-Lindenstrauss lemma, subspace embeddings, and Fourier-based restricted isometries, introducing a new complexity parameter that depends on the geometry of T and showing that it suffices to choose the sparsity s and embedding dimension m such that this parameter is small.
Low-Rank Approximation and Regression in Input Sparsity Time
We design a new distribution over m × n matrices S so that, for any fixed n × d matrix A of rank r, with probability at least 9/10, ∥SAx∥₂ = (1 ± ε)∥Ax∥₂ simultaneously for all x ∈ ℝ^d. Here, m is…
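The guarantee above can be checked numerically: with U an orthonormal basis of A's column space, ∥SAx∥₂ = (1 ± ε)∥Ax∥₂ holds for all x exactly when every singular value of SU lies in [1 − ε, 1 + ε]. Below is a sketch of that check for a CountSketch-style sparse embedding; the dimensions are illustrative assumptions and m is far from the sharpest known bounds.

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, m = 50_000, 20, 5_000                    # illustrative sizes
A = rng.standard_normal((n, d))

# Sparse embedding: each column of S has a single random +/-1 entry, so S @ M
# costs O(nnz(M)) time.
buckets = rng.integers(0, m, size=n)
signs = rng.choice([-1.0, 1.0], size=n)

def apply_sparse_embedding(M):
    SM = np.zeros((m, M.shape[1]))
    np.add.at(SM, buckets, signs[:, None] * M)
    return SM

# Subspace-embedding check on an orthonormal basis U of range(A): the singular
# values of S @ U should concentrate around 1.
U, _ = np.linalg.qr(A)
print(np.linalg.svd(apply_sparse_embedding(U), compute_uv=False))
```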
A Simpler Approach to Matrix Completion
  • B. Recht
  • Computer Science
    J. Mach. Learn. Res.
  • 2011
TLDR
This paper provides the best bounds to date on the number of randomly sampled entries required to reconstruct an unknown low-rank matrix by minimizing the nuclear norm of the hidden matrix subject to agreement with the provided entries.
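The optimization problem referred to is easy to state and, at toy sizes, to solve directly. The snippet below assumes the cvxpy package purely for illustration; the hidden matrix, observation pattern, and sizes are arbitrary, and nothing here reproduces the paper's sampling bounds.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(6)
n, r = 40, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # hidden low-rank matrix
mask = (rng.random((n, n)) < 0.5).astype(float)                  # which entries are observed

# Minimize the nuclear norm subject to agreement with the observed entries.
X = cp.Variable((n, n))
problem = cp.Problem(cp.Minimize(cp.normNuc(X)),
                     [cp.multiply(mask, X) == mask * M])
problem.solve()

print("relative recovery error:", np.linalg.norm(X.value - M) / np.linalg.norm(M))
```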
Input Sparsity Time Low-rank Approximation via Ridge Leverage Score Sampling
We present a new algorithm for finding a near-optimal low-rank approximation of a matrix $A$ in $O(nnz(A))$ time. Our method is based on a recursive sampling scheme for computing a representative subset of $A$'s columns…
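For reference, the rank-k ridge leverage score of column a_i is τ_i = a_iᵀ(AAᵀ + λI)⁻¹a_i with λ = ∥A − A_k∥_F²/k. The snippet below computes these scores exactly via an SVD and samples columns proportionally; it only illustrates the quantity being approximated, not the paper's recursive input-sparsity-time scheme, and all sizes are arbitrary.

```python
import numpy as np

def ridge_leverage_scores(A, k):
    """Exact rank-k ridge leverage scores of A's columns (via a full SVD)."""
    s = np.linalg.svd(A, compute_uv=False)
    lam = (s[k:] ** 2).sum() / k                       # tail mass sets the ridge
    K = A @ A.T + lam * np.eye(A.shape[0])
    return np.einsum('ij,ij->j', A, np.linalg.solve(K, A))   # a_i^T K^{-1} a_i per column

rng = np.random.default_rng(7)
# A matrix with decaying spectrum and many columns.
A = rng.standard_normal((100, 100)) @ np.diag(0.85 ** np.arange(100)) @ rng.standard_normal((100, 2_000))

k = 10
scores = ridge_leverage_scores(A, k)
cols = rng.choice(A.shape[1], size=4 * k, replace=False, p=scores / scores.sum())

# Project A onto the span of the sampled columns and compare with the best rank-k error.
Q, _ = np.linalg.qr(A[:, cols])
err_sampled = np.linalg.norm(A - Q @ (Q.T @ A))
err_best_k = np.linalg.norm(np.linalg.svd(A, compute_uv=False)[k:])
print("sampled-column error / best rank-k error:", err_sampled / err_best_k)
```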
Dimensionality Reduction for k-Means Clustering and Low Rank Approximation
TLDR
This work shows how to approximate a data matrix A with a much smaller sketch Ã that can be used to solve a general class of constrained k-rank approximation problems to within (1+ε) error, and gives a simple alternative to known algorithms that has applications in the streaming setting.
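k-means is one such constrained low-rank problem, and the effect is easy to demonstrate empirically. The toy below uses a plain random projection as the sketch (the paper analyzes several, generally stronger, reductions) and assumes scikit-learn only for the k-means solver; cluster counts and dimensions are arbitrary.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)
n, d, k = 2_000, 200, 5
centers = 5.0 * rng.standard_normal((k, d))
A = centers[rng.integers(k, size=n)] + rng.standard_normal((n, d))   # clustered data

# Sketch: randomly project the d-dimensional points down to m dimensions.
m = 20
A_sketch = A @ (rng.standard_normal((d, m)) / np.sqrt(m))

def kmeans_cost(X, labels):
    """k-means cost of the clustering `labels`, evaluated on X."""
    return sum(((X[labels == c] - X[labels == c].mean(0)) ** 2).sum() for c in np.unique(labels))

labels_full = KMeans(n_clusters=k, n_init=10, random_state=0).fit(A).labels_
labels_sketch = KMeans(n_clusters=k, n_init=10, random_state=0).fit(A_sketch).labels_

# Evaluate both clusterings on the original data: the clustering found on the
# sketch should cost nearly as little as the one found on the full data.
print(kmeans_cost(A, labels_sketch) / kmeans_cost(A, labels_full))
```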
Improved Matrix Algorithms via the Subsampled Randomized Hadamard Transform
TLDR
This article addresses the efficacy, in the Frobenius and spectral norms, of an SRHT-based low-rank matrix approximation technique introduced by Woolfe, Liberty, Rokhlin, and Tygert, and produces several results on matrix operations with SRHTs that may be of independent interest.
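A direct (dense) realization of the subsampled randomized Hadamard transform is short to write down; a practical implementation would replace the explicit Hadamard matrix with an O(n log n) fast Walsh-Hadamard transform and pad n to a power of two. scipy is assumed only for hadamard(), and all sizes are illustrative.

```python
import numpy as np
from scipy.linalg import hadamard

def srht(A, m, rng):
    """Subsampled randomized Hadamard transform: sqrt(n/m) * P H D A, where D
    flips signs, H/sqrt(n) is the orthogonal Hadamard matrix, and P keeps m rows.
    Dense version for clarity; n must be a power of two."""
    n = A.shape[0]
    assert n & (n - 1) == 0, "pad A with zero rows so that n is a power of two"
    D = rng.choice([-1.0, 1.0], size=n)
    HDA = hadamard(n) @ (D[:, None] * A) / np.sqrt(n)
    rows = rng.choice(n, size=m, replace=False)
    return np.sqrt(n / m) * HDA[rows]

rng = np.random.default_rng(9)
n, d, m = 1_024, 30, 400
A = rng.standard_normal((n, d))

# Subspace-embedding check on an orthonormal basis of range(A): the singular
# values of the transformed basis should concentrate around 1.
U, _ = np.linalg.qr(A)
print(np.linalg.svd(srht(U, m, rng), compute_uv=False))
```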