Publications
Equality of Opportunity in Supervised Learning
TLDR
This work proposes a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict some target based on available features, and shows how to optimally adjust any learned predictor so as to remove discrimination according to this definition.
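As a rough illustration of the criterion (not the paper's implementation), the sketch below compares group-wise true positive rates on synthetic data; equality of opportunity asks that these rates match across groups, and the paper further shows how to post-process a predictor so that they do. The data, predictor, and variable names are all made up for the example.

```python
import numpy as np

def true_positive_rates(y_true, y_pred, group):
    """Per-group true positive rates P(y_pred = 1 | y_true = 1, group = g)."""
    return {g: y_pred[(group == g) & (y_true == 1)].mean()
            for g in np.unique(group)}

# Synthetic data: binary labels, a binary sensitive attribute, and an imperfect predictor.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000)
y_pred = (y_true & (rng.random(1000) < 0.8)).astype(int)

print(true_positive_rates(y_true, y_pred, group))  # equal opportunity asks these to be (roughly) equal
```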
Compressed Sensing using Generative Models
TLDR
This work shows how to achieve guarantees similar to standard compressed sensing but without employing sparsity at all, and proves that, if G is L-Lipschitz, then roughly O(k log L) random Gaussian measurements suffice for an ℓ2/ℓ2 recovery guarantee.
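A minimal sketch of the recovery setup the summary describes, assuming a toy linear "generator" in place of a trained network: given Gaussian measurements y = Ax of a signal in the generator's range, recover it by minimizing ||A G(z) − y||² over the latent code z with gradient descent. Dimensions and step size are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, m = 200, 5, 40                      # ambient dim, latent dim, number of measurements

G = rng.normal(size=(n, k))               # toy *linear* generator G(z) = G @ z (stand-in for a network)
x_true = G @ rng.normal(size=k)           # signal lying in the range of the generator
A = rng.normal(size=(m, n)) / np.sqrt(m)  # random Gaussian measurement matrix
y = A @ x_true                            # compressed measurements, m << n

# Recover by minimizing ||A G(z) - y||^2 over the latent code z with gradient descent.
step = 1.0 / np.linalg.norm(A @ G, ord=2) ** 2   # conservative step size (inverse squared spectral norm)
z = np.zeros(k)
for _ in range(500):
    residual = A @ (G @ z) - y
    z -= step * G.T @ (A.T @ residual)

x_hat = G @ z
print("relative recovery error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```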
Simple and practical algorithm for sparse Fourier transform
TLDR
This work considers the sparse Fourier transform problem, and proposes a new algorithm, which leverages techniques from digital signal processing, notably Gaussian and Dolph-Chebyshev filters, and is faster than FFT, both in theory and practice.
Nearly optimal sparse Fourier transform
TLDR
If one assumes that the Fast Fourier Transform is optimal, the algorithm for the exactly k-sparse case is optimal for any k = n^{Ω(1)}; these are the first known algorithms to satisfy this property.
The Noisy Power Method: A Meta Algorithm with Applications
TLDR
This work provides a new robust convergence analysis of the noisy power method, a variant of the well-known power method for computing the dominant singular vectors of a matrix, and shows that the algorithm's error dependence on the matrix dimension can be replaced by an essentially tight dependence on the coherence of the matrix.
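A minimal numpy sketch of the kind of iteration the summary refers to, under simplifying assumptions: subspace (power) iteration in which each matrix product is perturbed by Gaussian noise and the iterate is re-orthonormalized with QR. The spectrum, noise level, and iteration count below are illustrative.

```python
import numpy as np

def noisy_power_method(A, k, iters=50, noise_scale=1e-3, seed=0):
    """Subspace (power) iteration where each step's product is perturbed by Gaussian noise."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    X, _ = np.linalg.qr(rng.normal(size=(n, k)))                    # random orthonormal start
    for _ in range(iters):
        Y = A.T @ (A @ X) + noise_scale * rng.normal(size=(n, k))   # noisy multiplication
        X, _ = np.linalg.qr(Y)                                      # re-orthonormalize
    return X                                                        # approx. top-k right singular subspace

# Test matrix with a clear spectral gap after the third singular value.
rng = np.random.default_rng(1)
U, _ = np.linalg.qr(rng.normal(size=(300, 50)))
V, _ = np.linalg.qr(rng.normal(size=(50, 50)))
s = np.concatenate([[10.0, 8.0, 6.0], np.linspace(1.0, 0.1, 47)])
A = U @ np.diag(s) @ V.T

X = noisy_power_method(A, k=3)
# Cosines of principal angles between the recovered and true top-3 subspaces (values near 1 mean aligned).
print(np.linalg.svd(X.T @ V[:, :3], compute_uv=False))
```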
AmbientGAN: Generative models from lossy measurements
TLDR
This work considers the task of learning an implicit generative model given only lossy measurements of samples from the distribution of interest, and proposes a new method of training Generative Adversarial Networks (GANs) which is called AmbientGAN.
Adversarial examples from computational constraints
TLDR
This work proves that, for a broad set of classification tasks, the mere existence of a robust classifier implies that it can be found by a possibly exponential-time algorithm with relatively few training examples, and gives an exponential separation between classical learning and robust learning in the statistical query model.
Lower bounds for sparse recovery
TLDR
The bound holds even for the more general version of the problem, where A is a random variable and the recovery algorithm is required to work for any fixed x with constant probability (over A), and the bound is tight.
Compressed Sensing with Deep Image Prior and Learned Regularization
TLDR
It is proved that single-layer DIP networks with constant-fraction over-parameterization will perfectly fit any signal through gradient descent, despite the fitting problem being non-convex, which provides justification for early stopping.
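A toy sketch of the fitting behavior the summary states, assuming a single-hidden-layer ReLU network with a fixed random input (as in deep image prior) trained by plain gradient descent to fit an arbitrary target. Widths, step size, and iteration counts are illustrative, and this does not reproduce the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, width = 64, 16, 4096              # signal length, input dim, hidden width (heavily over-parameterized)

x = rng.normal(size=n)                   # an arbitrary target signal
z = rng.normal(size=d)                   # fixed random input, as in deep image prior
W1 = rng.normal(size=(width, d)) / np.sqrt(d)
W2 = rng.normal(size=(n, width)) / np.sqrt(width)

h0 = np.maximum(W1 @ z, 0.0)
lr = 0.5 / float(h0 @ h0)                # conservative step size based on the initial features

for it in range(1001):
    pre = W1 @ z
    h = np.maximum(pre, 0.0)             # ReLU features
    out = W2 @ h
    r = out - x
    gW2 = np.outer(r, h)                          # grad of 0.5 * ||out - x||^2 w.r.t. W2
    gW1 = np.outer((W2.T @ r) * (pre > 0), z)     # grad w.r.t. W1 (chain rule through the ReLU)
    W2 -= lr * gW2
    W1 -= lr * gW1
    if it % 200 == 0:
        print(it, "loss:", 0.5 * float(r @ r))    # driven toward (near-)zero: the target is fit exactly
```

The loss is driven toward zero even though the objective is non-convex, which is the fitting behavior the summary says motivates early stopping when the target is a noisy signal.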
Tight Bounds for Learning a Mixture of Two Gaussians
TLDR
The main results are upper and lower bounds giving a computationally efficient moment-based estimator with an optimal convergence rate, thus resolving a problem introduced by Pearson (1894).