Publications
A Baseline for Detecting Misclassified and Out-of-Distribution Examples in Neural Networks
TLDR: We present a simple baseline that utilizes probabilities from softmax distributions to detect if an example is misclassified or out-of-distribution.
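As a sketch of the idea (not the paper's released code), the detector below scores each example by its maximum softmax probability and flags low-confidence inputs; the `threshold` value is an illustrative assumption, since the paper evaluates with threshold-free metrics such as AUROC.

```python
import numpy as np

def max_softmax_probability(logits):
    """Confidence score: the maximum class probability under softmax."""
    z = logits - logits.max(axis=-1, keepdims=True)  # stabilize exponentials
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return probs.max(axis=-1)

def flag_suspect(logits, threshold=0.5):
    """Flag examples whose confidence falls below a chosen threshold as
    likely misclassified or out-of-distribution (threshold is illustrative)."""
    return max_softmax_probability(logits) < threshold
```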
Benchmarking Neural Network Robustness to Common Corruptions and Perturbations
TLDR: In this paper we establish rigorous benchmarks for image classifier robustness.
Deep Anomaly Detection with Outlier Exposure
TLDR: We propose leveraging diverse, realistic datasets to improve deep anomaly detection by training anomaly detectors against an auxiliary dataset of outliers, an approach we call Outlier Exposure.
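A minimal PyTorch sketch of the objective as the abstract describes it: ordinary cross-entropy on in-distribution data plus a term that pushes predictions on auxiliary outliers toward the uniform distribution. The weight `lam` is an illustrative hyperparameter, not a value taken from the paper.

```python
import torch
import torch.nn.functional as F

def outlier_exposure_loss(logits_in, targets_in, logits_out, lam=0.5):
    """Standard cross-entropy on in-distribution data, plus a term that
    pushes the model toward a uniform posterior on auxiliary outliers."""
    ce_in = F.cross_entropy(logits_in, targets_in)
    # Cross-entropy between the predicted distribution and the uniform
    # distribution over classes, averaged over the outlier batch.
    ce_uniform = -F.log_softmax(logits_out, dim=1).mean()
    return ce_in + lam * ce_uniform
```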
Gaussian Error Linear Units (GELUs)
TLDR: We propose the Gaussian Error Linear Unit (GELU), a high-performing neural network activation function that weights inputs by their value, rather than their sign as in ReLUs ($x\mathbf{1}_{x>0}$).
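The activation itself is $\text{GELU}(x) = x\,\Phi(x)$, with $\Phi$ the standard normal CDF; the NumPy/SciPy sketch below gives the exact form and the tanh approximation stated in the paper.

```python
import math
import numpy as np
from scipy.special import erf

def gelu(x):
    """Exact GELU: x * Phi(x), where Phi is the standard normal CDF."""
    return 0.5 * x * (1.0 + erf(x / math.sqrt(2.0)))

def gelu_tanh(x):
    """The tanh approximation given in the paper."""
    return 0.5 * x * (1.0 + np.tanh(math.sqrt(2.0 / math.pi)
                                    * (x + 0.044715 * x**3)))
```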
AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty
TLDR: We propose AugMix, a data processing technique that is simple to implement, adds limited computational overhead, and helps models withstand unforeseen corruptions.
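The sketch below follows the mixing procedure the paper describes: compose a few random augmentation chains, blend them with Dirichlet weights, and interpolate the blend with the original image using a Beta-sampled weight. The `operations` list is an assumed interface (functions from image array to image array); the paper pairs this mixing with a Jensen-Shannon consistency loss, omitted here.

```python
import numpy as np

def augmix(image, operations, width=3, depth=3, alpha=1.0, rng=np.random):
    """Blend several randomly composed augmentation chains, then
    interpolate the blend with the original image."""
    ws = rng.dirichlet([alpha] * width)  # per-chain mixing weights
    m = rng.beta(alpha, alpha)           # final interpolation weight
    mix = np.zeros_like(image, dtype=np.float32)
    for w in ws:
        chain = image.astype(np.float32)
        for _ in range(rng.randint(1, depth + 1)):  # chain length 1..depth
            op = operations[rng.randint(len(operations))]
            chain = op(chain)
        mix += w * chain
    return (1 - m) * image.astype(np.float32) + m * mix
```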
Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty
TLDR: We find that self-supervision can benefit robustness in a variety of ways, including robustness to adversarial examples, label corruption, and common input corruptions.
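One concrete instance of such an auxiliary self-supervised task is rotation prediction; the sketch below assumes a `model` that returns penultimate features and a hypothetical `rot_head` that maps them to four logits.

```python
import torch
import torch.nn.functional as F

def rotation_ssl_loss(model, rot_head, images):
    """Auxiliary rotation-prediction loss: rotate each image by
    0/90/180/270 degrees and train a small head to predict the rotation."""
    rotations = [torch.rot90(images, k, dims=(2, 3)) for k in range(4)]
    x = torch.cat(rotations, dim=0)
    # Labels follow the concatenation order: one block per rotation.
    labels = torch.arange(4, device=images.device).repeat_interleave(len(images))
    return F.cross_entropy(rot_head(model(x)), labels)
```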
Bridging Nonlinearities and Stochastic Regularizers with Gaussian Error Linear Units
TLDR: We propose the Gaussian Error Linear Unit (GELU), a high-performing neural network activation function that combines the intuitions of dropout and zoneout while respecting neuron values.
Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise
TLDR: We demonstrate that robustness to label noise up to severe strengths can be achieved by using a set of trusted data with clean labels, and propose a loss correction that utilizes trusted examples in a data-efficient manner to mitigate the effects of label noise.
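A NumPy sketch of a two-step correction in the spirit the abstract describes: estimate a label-corruption matrix from the trusted examples, then score predictions against noisy labels through that matrix. The array shapes and names here are assumptions for illustration, not the paper's released code.

```python
import numpy as np

def estimate_corruption_matrix(noisy_probs, gold_labels, num_classes):
    """Step 1: average a noisy-label classifier's predicted distributions
    over trusted examples of each true class, giving an estimate of
    C[i, j] ~= p(noisy label = j | true label = i)."""
    C = np.zeros((num_classes, num_classes))
    for i in range(num_classes):
        C[i] = noisy_probs[gold_labels == i].mean(axis=0)
    return C

def corrected_nll(clean_probs, noisy_label, C):
    """Step 2: map predicted clean-label probabilities through C and
    score them against the observed noisy label."""
    noisy_dist = clean_probs @ C  # p(noisy = j) = sum_i p(clean = i) C[i, j]
    return -np.log(noisy_dist[noisy_label] + 1e-12)
```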
Natural Adversarial Examples
TLDR: We introduce natural adversarial examples: real-world, unmodified, and naturally occurring examples that significantly degrade classifier accuracy.
The Many Faces of Robustness: A Critical Analysis of Out-of-Distribution Generalization
TLDR: We introduce three new robustness benchmarks consisting of naturally occurring distribution changes in image style, geographic location, camera operation, and more.