Publications
Randomized algorithms for estimating the trace of an implicit symmetric positive semi-definite matrix
TLDR
It is argued that bounds on the number of samples required to guarantee that, with probability at least 1 − δ, the relative error in the estimate is at most ε are much more useful in applications than bounds on the variance.
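A minimal sketch of a Hutchinson-type estimator of the kind analyzed here, assuming the matrix is available only through a user-supplied matrix-vector product; the names `matvec` and `num_samples` are illustrative, not from the paper.

```python
import numpy as np

def hutchinson_trace(matvec, n, num_samples, seed=None):
    """Estimate trace(A) for an implicit SPSD matrix A, accessed only
    through matvec(v) = A @ v, by averaging Rademacher quadratic forms."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_samples):
        z = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe vector
        total += z @ matvec(z)               # E[z^T A z] = trace(A)
    return total / num_samples
```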
Blendenpik: Supercharging LAPACK's Least-Squares Solver
TLDR
A least-squares solver for dense, highly overdetermined systems that achieves residuals similar to those of direct QR-factorization-based solvers, outperforms LAPACK by large factors, and scales significantly better than any QR-based solver.
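A rough sketch of the sketch-and-precondition strategy behind a solver of this kind, assuming a tall dense A; a plain Gaussian sketch stands in for Blendenpik's randomized fast transform, so this illustrates the idea rather than the paper's implementation.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

def sketch_and_precondition(A, b, seed=None):
    """Solve min ||Ax - b||_2 for tall A: QR-factor a small random sketch
    of A and use its R factor to precondition an iterative LSQR solve."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    S = rng.standard_normal((4 * n, m)) / np.sqrt(4 * n)  # Gaussian sketch
    _, R = np.linalg.qr(S @ A)                            # preconditioner
    AR = LinearOperator(
        (m, n),
        matvec=lambda y: A @ np.linalg.solve(R, y),
        rmatvec=lambda z: np.linalg.solve(R.T, A.T @ z),
    )
    y = lsqr(AR, b)[0]            # well-conditioned iterative solve
    return np.linalg.solve(R, y)  # undo the change of variables
```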
Random Fourier Features for Kernel Ridge Regression: Approximation Bounds and Statistical Guarantees
TLDR
The results are twofold: on the one hand, random Fourier feature approximation is shown to provably speed up kernel ridge regression under reasonable assumptions; on the other hand, the standard sampling scheme is suboptimal, and sampling from a modified distribution in Fourier space, given by the leverage function of the kernel, yields provably better performance.
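A small sketch of the plain random Fourier feature construction the paper analyzes, for the Gaussian kernel with a ridge solve in feature space; names like `bandwidth` and `reg` are illustrative. The paper's leverage-based scheme would replace the Gaussian draw of W with a reweighted distribution.

```python
import numpy as np

def rff_ridge_fit(X, y, num_features, bandwidth, reg, seed=None):
    """Fit kernel ridge regression approximately: map inputs through
    random Fourier features for the Gaussian kernel, then solve a
    ridge problem in the feature space."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = rng.standard_normal((d, num_features)) / bandwidth  # frequencies
    b = rng.uniform(0.0, 2.0 * np.pi, num_features)         # phases
    Z = np.sqrt(2.0 / num_features) * np.cos(X @ W + b)
    # Ridge normal equations: (Z^T Z + reg * I) w = Z^T y
    w = np.linalg.solve(Z.T @ Z + reg * np.eye(num_features), Z.T @ y)
    return W, b, w
```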
Quasi-Monte Carlo Feature Maps for Shift-Invariant Kernels
TLDR
A new discrepancy measure, called box discrepancy, is derived from theoretical characterizations of the integration error with respect to a given sequence, and sequences for Quasi-Monte Carlo (QMC) feature-map approximations are learned by explicit box discrepancy minimization.
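A sketch of the basic QMC feature-map idea, assuming the Gaussian kernel: replace i.i.d. frequencies with a low-discrepancy sequence (here scrambled Halton) pushed through the Gaussian inverse CDF. `bandwidth` and the Halton choice are illustrative; the box-discrepancy-optimized sequences of the paper are not reproduced here.

```python
import numpy as np
from scipy.stats import norm, qmc

def qmc_fourier_features(X, num_features, bandwidth, seed=None):
    """Fourier features where Halton points in [0,1)^d are mapped
    through the Gaussian inverse CDF to frequency vectors."""
    d = X.shape[1]
    U = qmc.Halton(d=d, scramble=True, seed=seed).random(num_features)
    W = norm.ppf(U).T / bandwidth  # d x num_features frequency matrix
    b = np.random.default_rng(seed).uniform(0.0, 2.0 * np.pi, num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)
```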
Faster Subset Selection for Matrices and Applications
TLDR
It is shown that the combinatorial problem of finding a low-stretch spanning tree in an undirected graph corresponds to subset selection, and the various implications of this reduction are discussed.
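As a point of reference for what subset selection for matrices means here, a minimal baseline using QR with column pivoting; the paper's faster algorithms and the spanning-tree reduction are not reproduced.

```python
import numpy as np
from scipy.linalg import qr

def pivoted_qr_subset(A, k):
    """Select k column indices of A with QR column pivoting, a standard
    baseline for the column subset selection problem."""
    _, _, piv = qr(A, pivoting=True)  # greedy pivoting orders columns
    return piv[:k]
```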
Efficient and Practical Stochastic Subgradient Descent for Nuclear Norm Regularization
TLDR
Novel subgradient methods are developed for a broad class of matrix optimization problems involving nuclear norm regularization, combining low-rank stochastic subgradients with efficient incremental SVD updates made possible by highly optimized and parallelizable dense linear algebra operations on small matrices.
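The key object here is a subgradient of the nuclear norm; a minimal sketch under the assumption that a plain (rather than incremental) SVD is affordable, to show what the method's low-rank updates maintain.

```python
import numpy as np

def nuclear_norm_subgradient(X):
    """U @ Vt from the thin SVD of X is a subgradient of ||X||_*;
    the paper maintains such low-rank factors with incremental SVD
    updates instead of recomputing a full SVD at every step."""
    U, _, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ Vt
```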
Kernel methods match Deep Neural Networks on TIMIT
TLDR
Two algorithmic schemes are developed to address the computational bottleneck of scaling kernel ridge regression to large datasets, and it is demonstrated that these schemes enable kernel methods to match the performance of state-of-the-art Deep Neural Networks on TIMIT speech recognition and classification tasks.
Faster Kernel Ridge Regression Using Sketching and Preconditioning
TLDR
This paper proposes a preconditioning technique based on random feature maps, such as random Fourier features, which have recently emerged as a powerful way to speed up and scale the training of kernel-based methods by resorting to approximations.
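A condensed sketch of the strategy, assuming the exact kernel matrix K and a random feature matrix Z are already formed: Z Z^T + reg·I serves as a preconditioner for a conjugate-gradient solve of the regularized kernel system, applied cheaply via the Woodbury identity. All names are illustrative.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def feature_preconditioned_krr(K, Z, y, reg):
    """Solve (K + reg*I) a = y with CG, preconditioned by the random
    feature approximation (Z Z^T + reg*I)^{-1} via Woodbury."""
    n, s = Z.shape
    C = np.linalg.cholesky(Z.T @ Z + reg * np.eye(s))

    def apply_preconditioner(v):
        # (Z Z^T + reg I)^{-1} v = (v - Z (Z^T Z + reg I)^{-1} Z^T v) / reg
        t = np.linalg.solve(C.T, np.linalg.solve(C, Z.T @ v))
        return (v - Z @ t) / reg

    Kop = LinearOperator((n, n), matvec=lambda v: K @ v + reg * v)
    M = LinearOperator((n, n), matvec=apply_preconditioner)
    a, _ = cg(Kop, y, M=M)
    return a
```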
Subspace Embeddings for the Polynomial Kernel
TLDR
This work proposes the first fast oblivious subspace embeddings that are able to embed a space induced by a non-linear kernel without explicitly mapping the data to the high-dimensional space.
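A compact sketch of TensorSketch, the CountSketch-plus-FFT construction this line of work builds on for the degree-q polynomial kernel; the loop-based CountSketch is written for clarity rather than speed.

```python
import numpy as np

def tensorsketch(X, degree, sketch_dim, seed=None):
    """Embed rows of X so inner products approximate <x, x'>**degree:
    CountSketch each tensor factor, multiply sketches in Fourier domain."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    prod = np.ones((n, sketch_dim), dtype=complex)
    for _ in range(degree):
        h = rng.integers(0, sketch_dim, size=d)   # hash bucket per column
        s = rng.choice([-1.0, 1.0], size=d)       # random sign per column
        CS = np.zeros((n, sketch_dim))
        for j in range(d):                        # CountSketch of each row
            CS[:, h[j]] += s[j] * X[:, j]
        prod *= np.fft.fft(CS, axis=1)            # convolve via FFT product
    return np.real(np.fft.ifft(prod, axis=1))
```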
Revisiting Asynchronous Linear Solvers: Provable Convergence Rate through Randomization
TLDR
A randomized shared-memory asynchronous method for general symmetric positive definite matrices is proposed, and its convergence rate is proved to be linear and close to that of the method's synchronous counterpart if the processor count is not excessive relative to the size and sparsity of the matrix.
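The asynchronous analysis concerns updates of roughly this shape; a sequential stand-in for intuition, assuming A is symmetric positive definite, with each step an exact minimization along one randomly chosen coordinate. The shared-memory asynchrony the paper actually analyzes is not modeled here.

```python
import numpy as np

def randomized_coordinate_solver(A, b, num_steps, seed=None):
    """Approach the solution of A x = b for SPD A by repeatedly picking
    a random coordinate and minimizing the quadratic along it."""
    rng = np.random.default_rng(seed)
    x = np.zeros(A.shape[0])
    for _ in range(num_steps):
        i = rng.integers(A.shape[0])
        x[i] += (b[i] - A[i] @ x) / A[i, i]  # exact coordinate update
    return x
```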
...