Publications
Deep Anomaly Detection with Outlier Exposure
We propose leveraging diverse, realistic datasets to improve deep anomaly detection by training anomaly detectors against an auxiliary dataset of outliers, an approach we call Outlier Exposure.
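The abstract describes training against an auxiliary outlier dataset. A minimal NumPy sketch of that style of objective follows: standard cross-entropy on in-distribution data plus a term encouraging uniform predictions on outliers. The function names and the weighting parameter `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the class axis
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def outlier_exposure_loss(logits_in, labels_in, logits_out, lam=0.5):
    """Cross-entropy on in-distribution examples, plus a penalty
    (illustrative weight `lam`) pushing predictions on auxiliary
    outliers toward the uniform distribution."""
    p_in = softmax(logits_in)
    ce = -np.log(p_in[np.arange(len(labels_in)), labels_in]).mean()
    p_out = softmax(logits_out)
    # cross-entropy against the uniform distribution over k classes:
    # -(1/k) * sum_j log p_j, averaged over the outlier batch
    uniform_ce = -np.log(p_out).mean()
    return ce + lam * uniform_ce
```

With perfectly uniform outlier predictions the outlier term attains its minimum, log k, so confident (non-uniform) predictions on outliers are penalized.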
Using Self-Supervised Learning Can Improve Model Robustness and Uncertainty
We find that self-supervision can benefit robustness in a variety of ways, including robustness to adversarial examples, label corruption, and common input corruptions.
Using Trusted Data to Train Deep Networks on Labels Corrupted by Severe Noise
We demonstrate that robustness to label noise up to severe strengths can be achieved by using a set of trusted data with clean labels, and propose a loss correction that utilizes trusted examples in a data-efficient manner to mitigate the effects of label noise.
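The loss correction idea can be sketched as follows: use the small trusted set to estimate a label-corruption matrix, then train against labels pushed through that matrix. This is a simplified illustration assuming pairs of (true, noisy) labels are available on the trusted set; the paper's actual estimator may differ.

```python
import numpy as np

def estimate_corruption_matrix(true_labels, noisy_labels, num_classes):
    """Estimate C[i, j] = P(noisy label j | true label i) by counting
    label pairs on a small trusted set (illustrative estimator)."""
    C = np.zeros((num_classes, num_classes))
    for t, n in zip(true_labels, noisy_labels):
        C[t, n] += 1
    # normalize rows; guard against classes absent from the trusted set
    C /= np.maximum(C.sum(axis=1, keepdims=True), 1)
    return C

def corrected_cross_entropy(logits, noisy_labels, C):
    """Cross-entropy against the model's predicted *noisy*-label
    distribution p @ C, so fitting noisy labels drives p toward the
    clean posterior."""
    z = logits - logits.max(axis=1, keepdims=True)
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    q = p @ C  # distribution over noisy labels implied by p
    idx = np.arange(len(noisy_labels))
    return -np.log(q[idx, noisy_labels] + 1e-12).mean()
```

When `C` is the identity (no corruption), the corrected loss reduces to ordinary cross-entropy.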
Using Pre-Training Can Improve Model Robustness and Uncertainty
We show that although pre-training may not improve performance on traditional classification metrics, it improves model robustness and uncertainty estimates.
A Benchmark for Anomaly Segmentation
We introduce the Combined Anomalous Object Segmentation benchmark for the realistic task of large-scale anomaly segmentation, incorporating both scene realism and anomaly diversity.
Scaling Out-of-Distribution Detection for Real-World Settings
We set the stage for more realistic out-of-distribution detection by exploring large-scale multiclass and multi-label settings with high-resolution images and hundreds of classes.
Measuring Massive Multitask Language Understanding
We propose a new test to measure a text model's multitask accuracy.
The purpose of this paper is to present a largely self-contained proof of the singular value decomposition (SVD), and to explore its application to the low-rank approximation problem.
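The low-rank approximation application mentioned here has a standard one-line realization: truncate the SVD to the top k singular values (the Eckart–Young theorem says this is optimal in the Frobenius and spectral norms). A minimal NumPy sketch, with an illustrative function name:

```python
import numpy as np

def low_rank_approx(A, k):
    """Best rank-k approximation of A under the Frobenius/spectral
    norm, obtained by truncating the SVD (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # keep the k largest singular values and their singular vectors;
    # U[:, :k] * s[:k] scales each retained column of U by its sigma
    return (U[:, :k] * s[:k]) @ Vt[:k, :]
```

For a matrix that is already rank k, the truncation reconstructs it exactly; otherwise the approximation error is governed by the discarded singular values.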
Measuring Coding Challenge Competence With APPS
While programming is one of the most broadly applicable skills in modern society, modern machine learning models still cannot code solutions to basic problems.