Equality of Opportunity in Supervised Learning
TLDR: We propose a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features.
Citations: 1,370 · Highly influential: 278
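The criterion (equality of opportunity) asks that the true positive rate be equal across the groups defined by the sensitive attribute. A minimal sketch of the quantity this constrains, on synthetic data; the labels, attribute, and toy predictor below are all illustrative, not from the paper:

```python
import numpy as np

# Hypothetical toy data: binary labels y, a binary sensitive attribute a,
# and a predictor that is deliberately less accurate for group a == 1.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1000)
a = rng.integers(0, 2, size=1000)
flip = np.where(a == 1, rng.random(1000) < 0.3, rng.random(1000) < 0.1)
y_hat = np.where(flip, 1 - y, y)

def true_positive_rate(y, y_hat, mask):
    """P(y_hat = 1 | y = 1), restricted to the rows selected by mask."""
    pos = (y == 1) & mask
    return (y_hat[pos] == 1).mean()

tpr_0 = true_positive_rate(y, y_hat, a == 0)
tpr_1 = true_positive_rate(y, y_hat, a == 1)
# Equality of opportunity asks for equal TPRs; the gap measures the violation.
gap = abs(tpr_0 - tpr_1)
```

The paper also derives a post-processing step that adjusts an existing predictor to shrink this gap.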
Compressed Sensing using Generative Models
TLDR: The goal of compressed sensing is to estimate a vector from an underdetermined system of linear measurements, by making use of prior knowledge on the structure of vectors in the relevant domain.
Citations: 330 · Highly influential: 68
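A toy instance of this recovery problem, with a linear map standing in for the trained generative model (an illustrative assumption; the paper uses deep generative models such as VAEs and GANs). The signal lies in the model's range, and recovery is gradient descent on the measurement error over the latent code:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 50, 10, 3    # signal dim, measurements (m < n), latent dim

# Linear "generator" G(z) = W z standing in for a trained generative model.
W = rng.standard_normal((n, k))
z_true = rng.standard_normal(k)
x_true = W @ z_true                              # unknown signal in the range of G

A = rng.standard_normal((m, n)) / np.sqrt(m)     # underdetermined measurements
y = A @ x_true

# Recover by gradient descent on f(z) = ||A G(z) - y||^2.
M = A @ W
step = 1.0 / (2.0 * np.linalg.norm(M, 2) ** 2)   # safe step for this quadratic
z = np.zeros(k)
for _ in range(2000):
    z -= step * 2.0 * M.T @ (M @ z - y)          # gradient of f at z
x_hat = W @ z
```

With m = 10 measurements of an n = 50 dimensional signal, plain least squares is hopeless, but the k = 3 dimensional generative prior makes the problem well posed.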
Simple and practical algorithm for sparse Fourier transform
TLDR: We consider the sparse Fourier transform problem: given a complex vector x of length n, and a parameter k, estimate the k largest (in magnitude) coefficients of the Fourier transform of x.
Citations: 298 · Highly influential: 38
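For reference, the naive baseline this line of work improves on computes a full O(n log n) FFT and keeps the k largest-magnitude coefficients; the sparse Fourier transform algorithms recover the same answer in sublinear time when k ≪ n. The signal below is illustrative, chosen to be exactly k-sparse in the Fourier domain:

```python
import numpy as np

n, k = 1024, 3
t = np.arange(n)
# A signal that is exactly k-sparse in frequency: tones at bins 5, 100, 700.
x = (2.0 * np.exp(2j * np.pi * 5 * t / n)
     + 1.5 * np.exp(2j * np.pi * 100 * t / n)
     + 1.0 * np.exp(2j * np.pi * 700 * t / n))

X = np.fft.fft(x)                      # the O(n log n) baseline
top_k = np.argsort(np.abs(X))[-k:]     # indices of the k largest coefficients
```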
Nearly optimal sparse Fourier transform
TLDR: We consider the problem of computing the k-sparse approximation to the discrete Fourier transform of an n-dimensional signal.
Citations: 276 · Highly influential: 33
The Noisy Power Method: A Meta Algorithm with Applications
TLDR: We provide a new robust convergence analysis of the well-known power method for computing the dominant singular vectors of a matrix when a significant amount of noise is introduced after each matrix-vector multiplication.
Citations: 121 · Highly influential: 20
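A minimal sketch of the algorithm being analyzed: the classical power method, with noise injected after each matrix-vector product. The matrix construction and noise scale are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20
B = rng.standard_normal((n, n))
u = np.ones(n) / np.sqrt(n)
# Symmetric matrix with one deliberately dominant direction.
A = B @ B.T + 500.0 * np.outer(u, u)

v = rng.standard_normal(n)
v /= np.linalg.norm(v)
for _ in range(200):
    w = A @ v
    w += 1e-3 * rng.standard_normal(n)   # noise after each multiplication
    v = w / np.linalg.norm(w)            # v tracks the dominant eigenvector
```

The analysis bounds how large the per-iteration noise can be, relative to the spectral gap, while v still converges to the dominant eigenvector.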
AmbientGAN: Generative models from lossy measurements
TLDR: We show that the true underlying distribution can be provably recovered even in the presence of per-sample information loss for a class of measurement models.
Citations: 87 · Highly influential: 15
Adversarial examples from computational constraints
TLDR: We show that, for a broad set of classification tasks, the mere existence of a robust classifier implies that it can be found by a possibly exponential-time algorithm with relatively few training examples.
Citations: 144 · Highly influential: 9
Compressed Sensing with Deep Image Prior and Learned Regularization
TLDR: We propose a novel method for compressed sensing recovery using untrained deep generative models.
Citations: 80 · Highly influential: 9
Lower bounds for sparse recovery
TLDR: We consider the following k-sparse recovery problem: design an m × n matrix A such that, for any signal x, given Ax we can efficiently recover an x̂ satisfying ‖x − x̂‖₁ ≤ C · min_{k-sparse x′} ‖x − x′‖₁.
Citations: 179 · Highly influential: 8
Tight Bounds for Learning a Mixture of Two Gaussians
TLDR: We consider the problem of identifying the parameters of an unknown mixture of two arbitrary d-dimensional Gaussians from a sequence of independent random samples.
Citations: 68 · Highly influential: 8
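A toy 1-D instance of this estimation problem. Note the paper analyzes a moment-based estimator with tight sample-complexity bounds; the sketch below uses plain EM only as a familiar stand-in, on illustrative data:

```python
import numpy as np

rng = np.random.default_rng(3)
# 5000 samples from the mixture 0.5 N(-2, 1) + 0.5 N(+2, 1).
comp = rng.integers(0, 2, size=5000)
x = np.where(comp == 0, rng.normal(-2, 1, 5000), rng.normal(2, 1, 5000))

# EM for a two-component 1-D Gaussian mixture.
mu = np.array([-1.0, 1.0])
sigma = np.array([1.0, 1.0])
w = np.array([0.5, 0.5])
for _ in range(100):
    # E-step: responsibilities (normalizing constants cancel row-wise).
    d = np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma * w
    r = d / d.sum(axis=1, keepdims=True)
    # M-step: weighted means, standard deviations, and mixing weights.
    nk = r.sum(axis=0)
    w = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
```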