Erik Waingarten

We show tight upper and lower bounds for time–space trade-offs for the c-Approximate Near Neighbor Search problem. For d-dimensional Euclidean space and n-point datasets, we develop a data structure with space n^(1+ρ_u+o(1)) + O(dn) and query time n^(ρ_q+o(1)) + d·n^(o(1)) for every ρ_u, ρ_q ≥ 0 such that: c²·√ρ_q + (c² − 1)·√ρ_u = √(2c² − 1). (1) To illustrate these …
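As a quick numeric sanity check on trade-off (1): setting ρ_u = ρ_q = ρ makes the constraint read (2c² − 1)·√ρ = √(2c² − 1), which gives the balanced exponent ρ = 1/(2c² − 1). The sketch below just evaluates the constraint; the helper name `tradeoff_lhs` is illustrative, not from the paper.

```python
import math

def tradeoff_lhs(c, rho_q, rho_u):
    """Left-hand side of constraint (1): c^2 * sqrt(rho_q) + (c^2 - 1) * sqrt(rho_u)."""
    return c * c * math.sqrt(rho_q) + (c * c - 1) * math.sqrt(rho_u)

c = 2.0
# Balanced point rho_u = rho_q = 1 / (2c^2 - 1) should satisfy the constraint exactly:
rho = 1.0 / (2 * c * c - 1)
lhs = tradeoff_lhs(c, rho, rho)
rhs = math.sqrt(2 * c * c - 1)
print(abs(lhs - rhs) < 1e-12)  # True
```

For c = 2 this balanced point is ρ = 1/7, matching the symmetric space/query exponent one expects from the trade-off curve.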
We prove a lower bound of Ω(<i>n</i><sup>1/3</sup>) for the query complexity of any two-sided and adaptive algorithm that tests whether an unknown Boolean function <i>f</i>: {0,1}<sup><i>n</i></sup> → {0,1} is monotone versus far from monotone. This improves the recent lower bound of Ω(<i>n</i><sup>1/4</sup>) for the same problem …
We prove that any non-adaptive algorithm that tests whether an unknown Boolean function f : {0,1}^n → {0,1} is a k-junta or ε-far from every k-junta must make Ω̃(k^(3/2)/ε) many queries for a wide range of parameters k and ε. Our result dramatically improves previous lower bounds from [BGSMdW13, STW15], and is essentially optimal given Blais's non-adaptive junta tester …
We show tight lower bounds for the entire trade-off between space and query time for the Approximate Near Neighbor search problem. Our lower bounds hold in a restricted model of computation, which captures all hashing-based approaches. In particular, our lower bound matches the upper bound recently shown in [Laa15c] for the random instance on a Euclidean …
We show that every *symmetric* normed space admits an efficient nearest neighbor search data structure with doubly-logarithmic approximation. Specifically, for every <i>n</i>, <i>d</i> = <i>n</i><sup><i>o</i>(1)</sup>, and every <i>d</i>-dimensional symmetric norm ‖·‖, there exists a data structure for (log log <i>n</i>)-approximate nearest neighbor …
So far, we have been talking about multi-armed bandits where the rewards are stochastic, generated independently and identically from a fixed unknown distribution for each arm. Today, we'll look at a different setup: adversarial rewards. Instead of there being a distribution for each arm, we assume there is a hidden sequence of rewards for each arm i: r_{i,1}, …, r_{i,T} …
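The setup above can be sketched as an interaction protocol: the adversary fixes every reward r_{i,t} in advance, and the learner only ever observes the reward of the arm it actually pulls (bandit feedback). This is a minimal illustrative sketch; the names `play_bandit` and the uniform baseline policy are hypothetical, not from the notes.

```python
import random

def play_bandit(rewards, policy, T):
    """Adversarial bandit protocol: rewards[i][t] is the hidden,
    pre-specified reward of arm i at round t. The learner sees only
    the reward of the arm it pulls each round."""
    observed = []
    for t in range(T):
        arm = policy(observed)                    # choose an arm from past observations
        observed.append((arm, rewards[arm][t]))   # bandit feedback: one reward per round
    return sum(r for _, r in observed)

# Example: 2 arms, T = 4, reward sequences fixed in advance by the adversary.
rewards = [[1, 0, 1, 0],
           [0, 1, 0, 1]]
uniform = lambda hist: random.randrange(2)        # a trivial baseline policy
total = play_bandit(rewards, uniform, T=4)
```

Note the contrast with the stochastic setting: nothing here is drawn from a per-arm distribution, so any guarantee must hold against every fixed sequence.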
Consider the problem of learning a parametric distribution from observations. A frequentist approach to learning considers the parameters to be fixed, and uses the data to learn those parameters as accurately as possible. For example, consider the problem of learning a Bernoulli distribution's parameter (a random variable distributed as Bernoulli(μ) is 1 with …
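For the Bernoulli example, the frequentist (maximum-likelihood) estimate of μ is simply the empirical fraction of 1s among the observations. A minimal sketch, assuming Python; the name `mle_bernoulli` is illustrative.

```python
import random

def mle_bernoulli(samples):
    """Maximum-likelihood estimate of the Bernoulli parameter mu:
    the empirical fraction of 1s in the sample."""
    return sum(samples) / len(samples)

random.seed(0)
mu = 0.3
samples = [1 if random.random() < mu else 0 for _ in range(10_000)]
print(mle_bernoulli(samples))  # typically a value close to 0.3
```

With n samples the estimate concentrates around μ at rate O(1/√n), which is the sense in which the data pins down the fixed parameter "as accurately as possible."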
We give a poly(log n, 1/ε)-query adaptive algorithm for testing whether an unknown Boolean function f : {−1,1}^n → {−1,1}, which is promised to be a halfspace, is monotone versus ε-far from monotone. Since non-adaptive algorithms are known to require almost Ω(n) queries to test whether an unknown halfspace is monotone versus far from monotone, this shows …