• Publications
Towards Deep Learning Models Resistant to Adversarial Attacks
TLDR
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
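The defense studied here is adversarial training with projected gradient descent (PGD) as the inner, first-order adversary. Below is a minimal PyTorch-style sketch of an ℓ∞ PGD attack; the function name and hyperparameter values (typical CIFAR-10 settings) are illustrative, not the paper's code.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """l_inf PGD: iterated signed-gradient ascent on the loss,
    projected back onto the eps-ball around x (pixels assumed in [0, 1])."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()                       # ascend the loss
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)  # project
    return x_adv.detach()
```

Adversarial training then minimizes the loss on these worst-case inputs instead of the clean ones, which is the saddle-point (robust optimization) view the paper takes.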
Unlabeled Data Improves Adversarial Robustness
TLDR
It is proved that unlabeled data bridges the complexity gap between standard and robust classification: a simple semi-supervised learning procedure (self-training) achieves high robust accuracy using the same number of labels required for high standard accuracy.
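The semi-supervised procedure is plain self-training: fit an intermediate model on the labeled data, pseudo-label the unlabeled pool, and retrain on the union. A hedged scikit-learn sketch of that loop; in the paper the final step is robust (adversarial) training rather than the standard fit shown here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab):
    """Self-training: pseudo-label the unlabeled pool, then retrain on everything."""
    intermediate = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
    y_pseudo = intermediate.predict(X_unlab)    # pseudo-labels from the first model
    X_all = np.vstack([X_lab, X_unlab])
    y_all = np.concatenate([y_lab, y_pseudo])
    # In the paper, this final step is *robust* training on the combined set.
    return LogisticRegression(max_iter=1000).fit(X_all, y_all)
```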
Do ImageNet Classifiers Generalize to ImageNet?
TLDR
The results suggest that the accuracy drops are not caused by adaptivity, but by the models' inability to generalize to slightly "harder" images than those found in the original test sets.
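The methodology is to evaluate unmodified models on a freshly collected test set and report the gap, as in the companion CIFAR-10 study below. A minimal sketch, assuming a `model` with a `predict` method and arrays for both test sets (all names illustrative):

```python
import numpy as np

def accuracy(model, X, y):
    return float(np.mean(model.predict(X) == y))

def accuracy_drop(model, X_orig, y_orig, X_new, y_new):
    """Gap between the original test set and a newly collected replication of it."""
    return accuracy(model, X_orig, y_orig) - accuracy(model, X_new, y_new)
```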
Practical and Optimal LSH for Angular Distance
TLDR
This work shows the existence of a Locality-Sensitive Hashing (LSH) family for angular distance that yields an approximate near neighbor search algorithm with the asymptotically optimal running-time exponent, and it establishes a fine-grained lower bound on the quality of any LSH family for angular distance.
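The optimal family analyzed here is cross-polytope LSH: rotate the input (pseudo-)randomly and hash it to the nearest signed standard basis vector. A numpy sketch that uses a dense Gaussian matrix where the paper uses fast Hadamard-based pseudo-rotations for efficiency:

```python
import numpy as np

class CrossPolytopeHash:
    """One cross-polytope LSH function for angular distance:
    hash(x) = the signed coordinate axis closest to a randomly rotated x."""

    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        # Dense Gaussian matrix as a stand-in for the fast pseudo-random rotation.
        self.A = rng.standard_normal((dim, dim))

    def __call__(self, x):
        z = self.A @ x
        i = int(np.argmax(np.abs(z)))        # closest coordinate axis
        return (i, 1 if z[i] >= 0 else -1)   # one of 2*dim buckets
```

Vectors at small angular distance rotate to nearby directions and tend to share a bucket; concatenating several such functions gives the usual LSH amplification.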
Adversarially Robust Generalization Requires More Data
TLDR
It is shown that already in a simple natural data model, the sample complexity of robust learning can be significantly larger than that of "standard" learning.
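The "simple natural data model" is a spherical Gaussian mixture; as I read the paper, a single sample already yields low standard error in this model, while any ℓ∞-robust learner needs a number of samples growing roughly like the square root of the dimension d. A sketch of the setup:

```latex
% Labels are unbiased signs; inputs are Gaussian around the class mean.
y \sim \mathrm{Unif}\{-1,+1\}, \qquad
x \mid y \;\sim\; \mathcal{N}\!\left(y\,\theta^\star,\; \sigma^2 I_d\right)
```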
A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations
TLDR
It is shown that neural networks are already vulnerable to significantly simpler, and more likely to occur naturally, transformations of the inputs, and that current neural network-based vision models might not be as reliable as commonly assumed.
Exploring the Landscape of Spatial Robustness
TLDR
This work thoroughly investigates the vulnerability of neural network-based classifiers to rotations and translations and finds that, in contrast to the ℓp-norm case, first-order methods cannot reliably find worst-case spatial perturbations.
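Since first-order methods are unreliable here, the strongest attack in these two spatial-robustness papers is exhaustive grid search over rotations and translations. A scipy sketch for a single-channel image, with a grid roughly matching the papers' ±30 degree / few-pixel ranges (resolution illustrative):

```python
import numpy as np
from scipy.ndimage import rotate, shift

def worst_case_transform(predict, image, label,
                         angles=np.linspace(-30, 30, 31),
                         offsets=(-3, 0, 3)):
    """Grid search over rotations/translations; returns a fooling transform if any."""
    for angle in angles:
        for dx in offsets:
            for dy in offsets:
                t = rotate(image, angle, reshape=False, order=1)
                t = shift(t, (dy, dx), order=1)
                if predict(t) != label:
                    return t, (angle, dx, dy)   # attack succeeded
    return None, None                           # robust on this grid
```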
Retiring Adult: New Datasets for Fair Machine Learning
TLDR
A suite of new datasets derived from US Census surveys is created, extending the existing data ecosystem for research on fair machine learning with prediction tasks relating to income, employment, health, transportation, and housing.
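The datasets ship as the `folktables` Python package; a usage sketch based on its documented interface (details may differ across package versions):

```python
# pip install folktables
from folktables import ACSDataSource, ACSIncome

# One year of American Community Survey person records for California.
data_source = ACSDataSource(survey_year="2018", horizon="1-Year", survey="person")
acs_data = data_source.get_data(states=["CA"], download=True)

# One of the paper's prediction tasks: does a person's income exceed $50k?
features, label, group = ACSIncome.df_to_numpy(acs_data)
```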
A Nearly-Linear Time Framework for Graph-Structured Sparsity
TLDR
This work introduces a flexible framework for sparsity structures defined via graphs that generalizes several previously studied sparsity models and achieves information-theoretically optimal sample complexity for a wide range of parameters.
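The framework slots into model-based compressive sensing: the top-k projection inside iterative hard thresholding is replaced by the paper's nearly-linear-time approximate projections onto the graph-sparsity model. A skeleton where `project_model` is a hypothetical placeholder (plain top-k here):

```python
import numpy as np

def project_model(x, k):
    """Placeholder: keep the k largest-magnitude entries. The paper swaps this
    for approximate head/tail projections onto a graph-structured model."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def model_iht(A, y, k, iters=100):
    """Model-based iterative hard thresholding (unit step; assumes A well-conditioned)."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = project_model(x + A.T @ (y - A @ x), k)
    return x
```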
Do CIFAR-10 Classifiers Generalize to CIFAR-10?
TLDR
This work measures the accuracy of CIFAR-10 classifiers by creating a new test set of truly unseen images and finds a large drop in accuracy for a broad range of deep learning models.
...