Analysis of Boolean Functions
TLDR
This text gives a thorough overview of Boolean functions, beginning with the most basic definitions and proceeding to advanced topics such as hypercontractivity and isoperimetry; each chapter includes a "highlight application" such as Arrow's theorem from economics.
Optimal inapproximability results for MAX-CUT and other 2-variable CSPs?
TLDR
Though it is unable to prove the Majority Is Stablest conjecture, some partial results are enough to imply that MAX-CUT is hard to (3/4 + 1/(2π) + ε)-approximate (≈ .909155), assuming only the Unique Games Conjecture.
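As a quick sanity check on the constant quoted in the summary, the inapproximability threshold evaluates as

$$\frac{3}{4} + \frac{1}{2\pi} \approx 0.75 + 0.159155 \approx 0.909155,$$

which matches the ≈ .909155 figure above.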
Optimal Inapproximability Results for MAX-CUT and Other 2-Variable CSPs?
TLDR
This paper shows a reduction from the Unique Games problem to the problem of approximating MAX-CUT to within a factor of $\alpha_{\text{\tiny{GW}}} + \epsilon$ for all $\epsilon > 0$, and indicates that the geometric nature of the Goemans-Williamson algorithm might be intrinsic to the MAX-CUT problem.
Noise stability of functions with low influences: Invariance and optimality
TLDR
An invariance principle for multilinear polynomials with low influences and bounded degree is proved; it shows that under mild conditions the distribution of such polynomials is essentially invariant for all product spaces.
Learning juntas
TLDR
The algorithm and analysis exploit new structural properties of Boolean functions and obtain the first polynomial factor improvement on the naive n^k time bound achievable via exhaustive search.
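The naive n^k baseline that this paper improves on can be sketched as a brute-force search: try every size-k subset of coordinates and check whether the function is consistent with depending only on those bits. This is a minimal illustration (the function and names here are invented for the example, not taken from the paper):

```python
from itertools import combinations, product

def find_junta(f, n, k):
    """Naive exhaustive search for a k-junta: try every size-k subset S of
    the n coordinates and check whether f is consistent with a function of
    only the bits in S. Roughly O(n^k * 2^n) time -- the baseline bound."""
    inputs = list(product([0, 1], repeat=n))
    for S in combinations(range(n), k):
        table = {}           # restriction of the inputs to S -> observed value
        consistent = True
        for x in inputs:
            key = tuple(x[i] for i in S)
            y = f(x)
            if key in table and table[key] != y:
                consistent = False   # two inputs agree on S but disagree on f
                break
            table[key] = y
        if consistent:
            return S         # f is a junta on the coordinates in S
    return None

# Example: a function that depends only on coordinates 0 and 2.
f = lambda x: x[0] & x[2]
# find_junta(f, 4, 2) recovers the relevant coordinate set (0, 2).
```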
Optimal Lower Bounds for Locality-Sensitive Hashing (Except When q is Tiny)
TLDR
It is shown that the "optimal" lower bound for Locality-Sensitive Hashing (LSH) must be at least 1/c (minus o_d(1)), following almost immediately from the observation that the noise stability of a Boolean function at time t is a log-convex function of t.
New degree bounds for polynomial threshold functions
TLDR
The upper bounds for Boolean formulas yield the first known subexponential time learning algorithms for formulas of superconstant depth, and the first new degree lower bounds since 1968 are given, improving results of Minsky and Papert.
Optimal mean-based algorithms for trace reconstruction
TLDR
For any constant deletion rate 0 < δ < 1, a mean-based algorithm is given that uses exp(O(n^{1/3})) time and traces; it is proved that any mean-based algorithm must use at least exp(Ω(n^{1/3})) traces; and a surprising result is found: for deletion probabilities δ > 1/2, the presence of insertions can actually help with trace reconstruction.
Learning intersections and thresholds of halfspaces
We give the first polynomial time algorithm to learn any function of a constant number of halfspaces under the uniform distribution to within any constant error parameter. We also give the first …