Publications
Equality of Opportunity in Supervised Learning
TLDR
This work proposes a criterion for discrimination against a specified sensitive attribute in supervised learning, where the goal is to predict a target from available features, and shows how to optimally adjust any learned predictor so as to remove discrimination under this definition.
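As a rough illustration of the criterion (not the paper's post-processing construction), the sketch below measures an equal-opportunity gap, i.e. the difference in true positive rates between two groups defined by a binary sensitive attribute; the data and names are invented for the example.

import numpy as np

def equal_opportunity_gap(y_true, y_pred, group):
    # difference in empirical true positive rates between the two groups
    tprs = []
    for g in (0, 1):
        positives = (group == g) & (y_true == 1)
        tprs.append(y_pred[positives].mean())
    return abs(tprs[0] - tprs[1])

# toy example with synthetic labels, predictions, and a binary sensitive attribute
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000)
y_pred = (rng.random(1000) < 0.6 * y_true + 0.2).astype(int)
print("equal-opportunity gap:", equal_opportunity_gap(y_true, y_pred, group))

A predictor satisfying equal opportunity in the paper's sense would drive this gap to approximately zero, e.g. by choosing group-specific decision thresholds.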
Understanding deep learning requires rethinking generalization
TLDR
These experiments establish that state-of-the-art convolutional networks for image classification trained with stochastic gradient methods easily fit a random labeling of the training data, and confirm that simple depth two neural networks already have perfect finite sample expressivity.
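The headline experiment is easy to reproduce in miniature: assign labels uniformly at random and check that an over-parameterized model still drives training error toward zero. A toy analogue (not the paper's CIFAR-10/ImageNet setup), with all sizes chosen arbitrarily:

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))        # random inputs
y = rng.integers(0, 2, size=200)      # labels assigned at random: no true signal

# over-parameterized two-hidden-layer network
clf = MLPClassifier(hidden_layer_sizes=(512, 512), max_iter=2000, random_state=0)
clf.fit(X, y)
print("training accuracy on random labels:", clf.score(X, y))  # typically close to 1.0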
Fairness through awareness
TLDR
A framework for fair classification is presented, comprising a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.
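The core constraint is concrete enough to state directly: a randomized classifier M mapping individuals to distributions over outcomes must satisfy D(M(x), M(y)) ≤ d(x, y) for all pairs, where D is a statistical distance and d the task-specific similarity metric. A minimal check using total variation distance, with a made-up metric d standing in for the task-specific one:

import numpy as np

def total_variation(p, q):
    # total variation distance between two outcome distributions
    return 0.5 * np.abs(np.asarray(p) - np.asarray(q)).sum()

def lipschitz_violations(M, d, individuals):
    # pairs where D(M(x), M(y)) > d(x, y), i.e. the fairness constraint fails
    bad = []
    for i, x in enumerate(individuals):
        for y in individuals[i + 1:]:
            if total_variation(M[x], M[y]) > d(x, y):
                bad.append((x, y))
    return bad

# hypothetical two-individual example with an assumed similarity metric
M = {"x1": [0.9, 0.1], "x2": [0.2, 0.8]}   # outcome distributions of the classifier
d = lambda a, b: 0.3                        # stand-in for the task-specific metric
print(lipschitz_violations(M, d, ["x1", "x2"]))  # [('x1', 'x2')]: TV = 0.7 > 0.3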
Train faster, generalize better: Stability of stochastic gradient descent
We show that parametric models trained by a stochastic gradient method (SGM) with few iterations have vanishing generalization error. We prove our results by arguing that SGM is algorithmically stable.
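A representative bound of this kind, for the convex case (L-Lipschitz, smooth losses, step sizes α_t), stated here as a sketch whose constants should be checked against the paper:

\[
  \epsilon_{\mathrm{gen}} \;\le\; \epsilon_{\mathrm{stab}} \;\le\; \frac{2L^{2}}{n} \sum_{t=1}^{T} \alpha_t ,
\]

where n is the number of training examples, T the number of SGM iterations, and ε_stab the uniform stability of the algorithm; fewer iterations and smaller step sizes therefore translate into better generalization.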
Sanity Checks for Saliency Maps
TLDR
It is shown that some existing saliency methods are independent both of the model and of the data-generating process, and that methods failing the proposed tests are inadequate for tasks sensitive to either the data or the model.
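The model-parameter randomization test behind this finding has a simple shape: compute a saliency map for a trained model, reinitialize the weights, recompute the map, and check whether the two actually differ. A minimal gradient-saliency sketch in PyTorch (the untrained toy model and random input are placeholders for a real network and image):

import torch
from scipy.stats import spearmanr

def gradient_saliency(model, x):
    # plain input-gradient saliency for the top-scoring class
    x = x.clone().requires_grad_(True)
    model(x).max().backward()
    return x.grad.detach().abs().flatten()

model = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10))
x = torch.randn(1, 32)
sal_before = gradient_saliency(model, x)

# model-parameter randomization test: reinitialize all weights and recompute
for p in model.parameters():
    torch.nn.init.normal_(p)
sal_after = gradient_saliency(model, x)

# a method that passes the test should produce noticeably different maps (low rank correlation)
rho, _ = spearmanr(sal_before.numpy(), sal_after.numpy())
print("rank correlation between original and randomized saliency:", rho)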
A Simple and Practical Algorithm for Differentially Private Data Release
TLDR
A new algorithm for differentially private data release, based on a simple combination of the Multiplicative Weights update rule with the Exponential Mechanism, achieves the best known (and nearly optimal) theoretical guarantees while being simple to implement and, in experiments, more accurate on real data sets than existing techniques.
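A compact numpy sketch of the MWEM loop over a small discrete data universe (toy parameters, counting queries given as 0/1 indicator vectors; a schematic rendering rather than the authors' implementation):

import numpy as np

rng = np.random.default_rng(0)

def mwem(true_hist, queries, eps, T):
    # multiplicative weights over the data universe, guided by the exponential
    # mechanism (query selection) and Laplace noise (query measurement)
    n = true_hist.sum()
    synth = np.full(len(true_hist), n / len(true_hist))   # start from the uniform histogram
    eps_round = eps / T                                    # split the budget across rounds
    for _ in range(T):
        # exponential mechanism: favor queries with large error on the synthetic data
        errors = np.array([abs(q @ true_hist - q @ synth) for q in queries])
        probs = np.exp((eps_round / 2) * errors / 2)       # error score has sensitivity 1
        i = rng.choice(len(queries), p=probs / probs.sum())
        # noisy measurement of the selected query on the true data
        m = queries[i] @ true_hist + rng.laplace(scale=2 / eps_round)
        # multiplicative weights update toward the measurement, then renormalize
        synth *= np.exp(queries[i] * (m - queries[i] @ synth) / (2 * n))
        synth *= n / synth.sum()
    return synth

# toy universe of 8 items and 10 random counting queries
true_hist = rng.integers(0, 50, size=8).astype(float)
queries = [rng.integers(0, 2, size=8).astype(float) for _ in range(10)]
print(mwem(true_hist, queries, eps=1.0, T=5).round(1))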
On the geometry of differential privacy
TLDR
The lower bound is strong enough to separate the concept of differential privacy from the notion of approximate differential privacy, where an upper bound of O(√d/ε) can be achieved.
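For orientation only, the baseline mechanisms already display a gap of this shape (this is the standard Laplace-versus-Gaussian comparison, not the paper's geometric analysis): answering d counting queries incurs per-query error

\[
  O\!\left(\frac{d}{\varepsilon}\right) \text{ under pure } \varepsilon\text{-DP (Laplace noise, } \ell_1\text{-sensitivity } d\text{)}, \qquad
  O\!\left(\frac{\sqrt{d \ln(1/\delta)}}{\varepsilon}\right) \text{ under } (\varepsilon,\delta)\text{-DP (Gaussian noise, } \ell_2\text{-sensitivity } \sqrt{d}\text{)}.
\]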
Preserving Statistical Validity in Adaptive Data Analysis
TLDR
It is shown that, surprisingly, there is a way to accurately estimate a number of expectations exponential in n even when the functions are chosen adaptively, an exponential improvement over standard empirical estimators, which are limited to a linear number of estimates.
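The mechanism behind guarantees of this kind is simple to sketch: rather than returning exact empirical means to the (adaptive) analyst, return noisy ones, so that later queries cannot overfit to the sample. A minimal sketch of that idea, with arbitrary parameters rather than the paper's calibration:

import numpy as np

rng = np.random.default_rng(0)

class NoisyQueryAnswerer:
    # answers adaptively chosen statistical queries q: X -> [0, 1] with Laplace noise,
    # the basic ingredient behind validity guarantees for adaptive data analysis
    def __init__(self, sample, eps_per_query=0.1):
        self.sample = np.asarray(sample)
        self.scale = 1.0 / (eps_per_query * len(sample))   # Laplace scale for a mean query

    def answer(self, q):
        empirical = np.mean([q(x) for x in self.sample])
        return float(np.clip(empirical + rng.laplace(scale=self.scale), 0.0, 1.0))

# the analyst may choose each query after seeing the previous answers
sample = rng.normal(size=500)
mech = NoisyQueryAnswerer(sample)
a1 = mech.answer(lambda x: float(x > 0))
a2 = mech.answer(lambda x: float(x > a1))   # adaptively chosen follow-up query
print(a1, a2)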
Avoiding Discrimination through Causal Reasoning
TLDR
This work crisply articulates why and when observational criteria fail, formalizing what was previously a matter of opinion, and puts forward natural causal non-discrimination criteria together with algorithms that satisfy them.
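One of the criteria is easy to probe in simulation: in a structural causal model, a predictor exhibits no proxy discrimination if intervening on the proxy leaves the predictor's distribution unchanged. A toy linear model illustrating the check (the structural equations and coefficients are invented for illustration):

import numpy as np

rng = np.random.default_rng(0)

def sample_prediction(do_proxy, n=100_000):
    # sample the predictor R under the intervention do(P = do_proxy) in a toy model:
    # proxy P -> feature X -> prediction R, plus a feature Z unrelated to P
    P = np.full(n, do_proxy)
    X = 0.8 * P + rng.normal(size=n)
    Z = rng.normal(size=n)
    R = 1.5 * X + 0.5 * Z
    return R

r0, r1 = sample_prediction(0.0), sample_prediction(1.0)
print("shift under do(P=0) vs do(P=1):", r1.mean() - r0.mean())
# a nonzero shift (about 1.2 here) flags proxy discrimination under this criterion;
# a predictor built only from Z would remove it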
A Multiplicative Weights Mechanism for Privacy-Preserving Data Analysis
TLDR
A new differentially private multiplicative weights mechanism is presented for answering a large number of interactive counting (or linear) queries that arrive online and may be adaptively chosen. It is shown that when the input database is drawn from a smooth distribution (one that does not place too much weight on any single data item), accuracy remains as above and the running time becomes poly-logarithmic in the size of the data universe.
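The heart of the mechanism is a multiplicative weights update on a synthetic distribution x_t over the data universe; schematically (up to normalization and the exact noise calibration, which should be taken from the paper):

\[
  x_{t+1}(i) \;\propto\; x_t(i)\, \exp\!\big(\eta\, f_t(i)\, \sigma_t\big), \qquad
  \sigma_t = \mathrm{sign}\big(\hat a_t - f_t(x_t)\big),
\]

where f_t is the t-th linear query, \hat a_t its noisy answer on the true database, f_t(x_t) its answer on the current synthetic distribution, and η a step size; the update fires only when the discrepancy between the two is large.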