Low rank approximation and regression in input sparsity time
@inproceedings{Clarkson2013LowRA, title={Low rank approximation and regression in input sparsity time}, author={K. Clarkson and D. Woodruff}, booktitle={STOC '13}, year={2013} }
We design a new distribution over poly(r ε<sup>-1</sup>) × n matrices S so that for any fixed n × d matrix A of rank r, with probability at least 9/10, ‖SAx‖<sub>2</sub> = (1 ± ε)‖Ax‖<sub>2</sub> simultaneously for all x ∈ R<sup>d</sup>. Such a matrix S is called a <i>subspace embedding</i>. Furthermore, SA can be computed in O(nnz(A)) + Õ(r<sup>2</sup>ε<sup>-2</sup>) time, where nnz(A) is the number of non-zero entries of A. This improves over all previous subspace embeddings, which required at…
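The running-time claim rests on S being extremely sparse: each column of S has a single ±1 entry in a uniformly random row, so SA is formed in a single pass over the nonzeros of A. Below is a minimal Python sketch of that construction (the sparse embedding matrix, often called CountSketch); the sketch size k, the test matrix, and the norm check are illustrative choices, not the paper's exact parameters.

```python
import numpy as np
import scipy.sparse as sp

def sparse_embedding(n, k, rng):
    """Build a k x n sparse embedding S: one ±1 per column, in a random row."""
    rows = rng.integers(0, k, size=n)        # hash h: [n] -> [k]
    cols = np.arange(n)                      # one nonzero per column of S
    signs = rng.choice((-1.0, 1.0), size=n)  # random signs sigma(i) in {±1}
    return sp.csr_matrix((signs, (rows, cols)), shape=(k, n))

rng = np.random.default_rng(0)
n, d = 100_000, 20
A = sp.random(n, d, density=1e-3, random_state=0, format="csr")  # sparse test matrix
k = 2_000                                    # illustrative sketch size
S = sparse_embedding(n, k, rng)
SA = S @ A                                   # applying S costs O(nnz(A))

# Spot-check the guarantee ‖SAx‖₂ = (1 ± ε)‖Ax‖₂ on one random direction x.
x = rng.standard_normal(d)
print(np.linalg.norm(SA @ x) / np.linalg.norm(A @ x))  # should be close to 1
```

For most draws the printed ratio is close to 1; the paper's analysis makes this hold uniformly over all x in the column space of A, with failure probability at most 1/10.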
256 Citations
OSNAP: Faster Numerical Linear Algebra Algorithms via Sparser Subspace Embeddings
- Mathematics, Computer Science · 2013 IEEE 54th Annual Symposium on Foundations of Computer Science (FOCS) · 246 citations
Subspace Embeddings and \(\ell_p\)-Regression Using Exponential Random Variables
- Mathematics, Computer Science · COLT 2013 · 52 citations
Tighter Low-rank Approximation via Sampling the Leveraged Element
- Computer Science, Mathematics · SODA 2015 · 32 citations
Input Sparsity Time Low-rank Approximation via Ridge Leverage Score Sampling
- Computer Science, Mathematics · SODA 2017 · 78 citations