Improved Approximation Algorithms for Large Matrices via Random Projections
  • Tamás Sarlós
  • Mathematics, Computer Science
  • 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2006)
  • 21 October 2006
TLDR
The key idea is that low-dimensional embeddings can be used to eliminate data dependence and provide more versatile, linear-time, pass-efficient matrix computations.
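A minimal sketch of the random-projection idea in numpy, in the spirit of this line of work rather than the paper's exact pass-efficient algorithm; randomized_low_rank and all parameter choices below are illustrative.

```python
# Illustrative randomized low-rank approximation via random projection;
# a textbook sketch, not the paper's exact algorithm.
import numpy as np

def randomized_low_rank(A, k, oversample=10, seed=0):
    """Approximate a rank-k factorization of A by projecting onto the
    range of A @ Omega for a random Gaussian test matrix Omega."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    Omega = rng.standard_normal((n, k + oversample))  # random projection
    Q, _ = np.linalg.qr(A @ Omega)    # orthonormal basis for the sketch
    B = Q.T @ A                       # small (k + oversample) x n problem
    U_small, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ U_small[:, :k], s[:k], Vt[:k, :]

A = np.random.default_rng(1).standard_normal((1000, 300))
U, s, Vt = randomized_low_rank(A, k=20)
print(np.linalg.norm(A - (U * s) @ Vt))  # compare against an exact SVD
```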
Towards Scaling Fully Personalized PageRank: Algorithms, Lower Bounds, and Experiments
TLDR
Full personalization is achieved by a novel algorithm that precomputes a compact database; using this database, it can serve online responses to arbitrary user-selected personalization, and it is proved that for a fixed error probability the size of the database is linear in the number of web pages.
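A hedged sketch of the Monte Carlo flavor of this idea: precompute random-walk endpoints ("fingerprints") per node as the compact database, then answer personalized queries from the stored samples. Function names and parameters are illustrative, not the paper's exact data structure.

```python
# Illustrative Monte Carlo personalized PageRank: store random-walk
# endpoints per node, then serve queries from the precomputed counts.
import random
from collections import Counter

def precompute_fingerprints(graph, n_walks=100, alpha=0.15, seed=0):
    rng = random.Random(seed)
    db = {}
    for v in graph:
        endpoints = []
        for _ in range(n_walks):
            u = v
            # Continue the walk with probability 1 - alpha.
            while rng.random() > alpha and graph[u]:
                u = rng.choice(graph[u])
            endpoints.append(u)
        db[v] = Counter(endpoints)
    return db

def personalized_pagerank(db, source, n_walks=100):
    # Endpoint frequencies approximate the personalized PageRank vector.
    return {u: c / n_walks for u, c in db[source].items()}

graph = {0: [1, 2], 1: [2], 2: [0], 3: [0, 2]}
db = precompute_fingerprints(graph)
print(personalized_pagerank(db, 3))
```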
Fastfood: Approximate Kernel Expansions in Loglinear Time
TLDR
Fastfood, an approximation that accelerates kernel methods significantly, achieves accuracy similar to full kernel expansions and Random Kitchen Sinks while being 100x faster and using 1000x less memory, making kernel methods more practical for applications that have large training sets and/or require real-time prediction.
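A hedged sketch of a Fastfood-style feature map: the dense Gaussian matrix of Random Kitchen Sinks is replaced by a product of diagonal and Hadamard matrices. The explicit Hadamard matrix below is for clarity only; the actual method applies a fast Walsh-Hadamard transform in O(d log d), and the factorization and scaling here are simplified.

```python
# Illustrative Fastfood-style structured random features; a simplified
# variant of the paper's S H G Pi H B factorization.
import numpy as np
from scipy.linalg import hadamard

def fastfood_features(X, sigma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]                       # d must be a power of two here
    H = hadamard(d).astype(float)
    B = rng.choice([-1.0, 1.0], size=d)  # random signs (diagonal)
    P = rng.permutation(d)               # random permutation
    G = rng.standard_normal(d)           # Gaussian scaling (diagonal)
    # Structured stand-in for a dense Gaussian matrix.
    V = (H * G) @ (H[:, P] * B) / (sigma * d)
    Z = X @ V.T
    return np.concatenate([np.cos(Z), np.sin(Z)], axis=1) / np.sqrt(d)

X = np.random.default_rng(1).standard_normal((5, 64))
print(fastfood_features(X).shape)  # (5, 128)
```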
Faster least squares approximation
TLDR
This work presents two randomized algorithms that provide accurate relative-error approximations to the optimal value and the solution vector of a least squares approximation problem more rapidly than existing exact algorithms.
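A minimal sketch of the sketch-and-solve idea: compress the tall least-squares problem with a random projection and solve the small problem. A plain Gaussian sketch stands in for the paper's faster structured transforms; sketched_lstsq and its parameters are illustrative.

```python
# Illustrative sketched least squares: solve min ||S A x - S b|| for a
# random sketching matrix S instead of the full problem.
import numpy as np

def sketched_lstsq(A, b, sketch_rows=200, seed=0):
    rng = np.random.default_rng(seed)
    m = A.shape[0]
    S = rng.standard_normal((sketch_rows, m)) / np.sqrt(sketch_rows)
    x, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x

A = np.random.default_rng(1).standard_normal((10000, 20))
b = np.random.default_rng(2).standard_normal(10000)
x_sketch = sketched_lstsq(A, b)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(x_sketch - x_exact))  # small residual difference
```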
Rethinking Attention with Performers
TLDR
Performers are introduced: Transformer architectures that can estimate regular (softmax) full-rank-attention Transformers with provable accuracy, using only linear space and time complexity and without relying on priors such as sparsity or low-rankness.
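A hedged sketch of linear attention with positive random features in the spirit of FAVOR+; a simplified numpy illustration, not the paper's exact estimator or a production implementation.

```python
# Illustrative linear attention: positive random features approximate
# the softmax kernel, and associativity makes the cost linear in n.
import numpy as np

def positive_features(X, W):
    # phi(x) = exp(W x - |x|^2 / 2) / sqrt(m); for Gaussian W this gives
    # an unbiased estimator of the softmax kernel exp(q . k).
    m = W.shape[0]
    return np.exp(X @ W.T - 0.5 * np.sum(X**2, axis=-1, keepdims=True)) / np.sqrt(m)

def linear_attention(Q, K, V, n_features=128, seed=0):
    rng = np.random.default_rng(seed)
    d = Q.shape[-1]
    W = rng.standard_normal((n_features, d))
    Qp = positive_features(Q / d**0.25, W)
    Kp = positive_features(K / d**0.25, W)
    # (Qp Kp^T) V == Qp (Kp^T V); the right-hand side avoids the n x n matrix.
    KV = Kp.T @ V
    normalizer = Qp @ Kp.sum(axis=0)
    return (Qp @ KV) / normalizer[:, None]

Q = K = V = np.random.default_rng(1).standard_normal((16, 32))
print(linear_attention(Q, K, V).shape)  # (16, 32)
```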
SpamRank -- Fully Automatic Link Spam Detection
TLDR
A novel method based on the concept of personalized PageRank is proposed that detects pages with an undeserved high PageRank value, without the need for any kind of whitelist, blacklist, or other means of human intervention.
A sparse Johnson-Lindenstrauss transform
TLDR
A sparse version of the Johnson-Lindenstrauss transform, the fundamental tool in dimension reduction, is obtained by using hashing and local densification to construct a sparse projection matrix with just ~O(1/ε) non-zero entries per column, and a matching lower bound on the sparsity is shown for a large class of projection matrices.
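A hedged sketch of a hashing-based sparse projection: each input coordinate maps to a few rows with random signs, so the matrix has s non-zeros per column. The parameters are illustrative and do not reproduce the paper's exact construction or sparsity bound.

```python
# Illustrative sparse JL-style projection with s non-zeros per column.
import numpy as np

def sparse_jl_matrix(d, k, s=4, seed=0):
    rng = np.random.default_rng(seed)
    S = np.zeros((k, d))
    for j in range(d):
        # Hash coordinate j to s distinct rows with random signs.
        rows = rng.choice(k, size=s, replace=False)
        S[rows, j] = rng.choice([-1.0, 1.0], size=s) / np.sqrt(s)
    return S

x = np.random.default_rng(1).standard_normal(1000)
S = sparse_jl_matrix(d=1000, k=64)
print(np.linalg.norm(S @ x), np.linalg.norm(x))  # norms roughly agree
```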
On scheduling in map-reduce and flow-shops
TLDR
This work formalizes job scheduling in map-reduce as a novel generalization of the classical two-stage flexible flow shop (FFS) problem: instead of a single task at each stage, a job now consists of a set of tasks per stage.
On estimating the average degree
TLDR
This work considers the problem of estimating the average degree of a large network using efficient random sampling, where the number of nodes is not known to the algorithm, and proposes a new estimator that relies on access to node samples under a prescribed distribution.
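For illustration, a naive uniform-sampling baseline is sketched below; the paper's estimator relies on a different, prescribed sampling distribution, so this is only a hedged stand-in with illustrative names.

```python
# Naive average-degree estimation from uniform node samples; a baseline
# sketch, not the paper's estimator.
import random

def estimate_avg_degree(graph, n_samples=1000, seed=0):
    rng = random.Random(seed)
    nodes = list(graph)
    samples = [len(graph[rng.choice(nodes)]) for _ in range(n_samples)]
    return sum(samples) / n_samples

graph = {v: [u for u in range(50) if u != v and (u + v) % 3 == 0]
         for v in range(50)}
exact = sum(len(nbrs) for nbrs in graph.values()) / len(graph)
print(estimate_avg_degree(graph), exact)
```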