• Publications
Enhancing The Reliability of Out-of-distribution Image Detection in Neural Networks
TLDR
The proposed ODIN method consistently outperforms the baseline approach by a large margin; it is based on the observation that applying temperature scaling and adding small perturbations to the input can separate the softmax score distributions of in- and out-of-distribution images, allowing for more effective detection.
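The temperature-scaling half of the ODIN score can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the input-perturbation step is omitted, the function name `odin_score` is invented here, and `T=1000.0` is just one of the temperature values explored in the paper.

```python
import numpy as np

def odin_score(logits, T=1000.0):
    """Temperature-scaled max-softmax score (perturbation step omitted).

    Dividing the logits by a large temperature T flattens the softmax,
    which the paper observes separates in- and out-of-distribution
    score distributions more cleanly. Higher scores suggest
    in-distribution; a threshold is calibrated on validation data.
    """
    z = logits / T
    z = z - z.max()                      # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()      # softmax probabilities
    return p.max()                       # max class probability as score
```

For peaked logits, the unscaled score (T=1) sits near 1 while the temperature-scaled score drops toward 1/num_classes; it is the *gap* between in- and out-of-distribution scores that widens under scaling, which is what makes thresholding effective.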
Exploring the Limits of Weakly Supervised Pretraining
TLDR
This paper presents a unique study of transfer learning with large convolutional networks trained to predict hashtags on billions of social media images and shows improvements on several image classification and object detection tasks, and reports the highest ImageNet-1k single-crop, top-1 accuracy to date.
Snapshot Ensembles: Train 1, get M for free
TLDR
This paper proposes a method that achieves the seemingly contradictory goal of ensembling multiple neural networks at no additional training cost: a single network is trained so that it converges to several local minima along its optimization path, and the model parameters are saved at each one.
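The cyclic learning-rate schedule that drives the network into successive minima can be sketched as below. This is an illustrative sketch of the cosine-annealing-with-restarts idea, assuming a fixed iteration budget split evenly across cycles; the function name `snapshot_lr` and the default `lr_max=0.1` are assumptions, not values taken from the paper.

```python
import math

def snapshot_lr(t, total_iters, cycles, lr_max=0.1):
    """Cyclic cosine-annealed learning rate for snapshot ensembling.

    The rate restarts at lr_max at the start of each cycle and anneals
    toward 0; a model snapshot is saved at the end of each cycle, when
    the network has settled into a local minimum.
    """
    iters_per_cycle = total_iters // cycles
    pos = (t % iters_per_cycle) / iters_per_cycle  # fraction of cycle done
    return lr_max / 2 * (math.cos(math.pi * pos) + 1)
```

At test time the saved snapshots are averaged as an ensemble, so M models come from the cost of training one.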
Energy-based Out-of-distribution Detection
TLDR
This work proposes a unified framework for OOD detection that uses an energy score, and shows that energy scores better distinguish in- and out-of-distribution samples than the traditional approach using the softmax scores.
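The energy score is essentially a temperature-scaled log-sum-exp over the logits. A minimal sketch follows; the function name `energy_score` is an assumption, and for readability it returns the *negative* energy, so that higher values indicate in-distribution, matching the convention of softmax-based scores.

```python
import numpy as np

def energy_score(logits, T=1.0):
    """Negative free energy, -E(x) = T * logsumexp(f(x) / T).

    Unlike the max-softmax score, this aggregates evidence from all
    logits, so confidently classified in-distribution inputs receive
    higher scores than low-logit out-of-distribution inputs.
    """
    z = logits / T
    m = z.max()                                   # shift for stability
    return T * (m + np.log(np.exp(z - m).sum()))  # stable logsumexp
```

Thresholding this score replaces the softmax-probability threshold of the baseline detector.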
Stacked Generative Adversarial Networks
TLDR
A novel generative model named Stacked Generative Adversarial Networks (SGAN) is trained to invert the hierarchical representations of a bottom-up discriminative network, and is able to generate images of much higher quality than GANs without stacking.
Convergent Learning: Do different neural networks learn the same representations?
TLDR
This paper investigates the extent to which neural networks exhibit convergent learning, i.e., whether the representations learned by multiple networks converge to a set of features that are either individually similar across networks or whose subsets span similar low-dimensional spaces.
Principled Detection of Out-of-Distribution Examples in Neural Networks
TLDR
ODIN is proposed: a simple and effective out-of-distribution detector for neural networks that requires no change to a pre-trained model, based on the observation that using temperature scaling and adding small perturbations to the input can separate the softmax score distributions of in- and out-of-distribution samples, allowing for more effective detection.
Uncovering the Small Community Structure in Large Networks: A Local Spectral Approach
TLDR
A novel approach for finding overlapping communities called LEMON (Local Expansion via Minimum One Norm), which, unlike PageRank-like diffusion methods, finds the community by seeking a sparse vector in the span of the local spectra such that the seeds are in its support.
Defense Against Adversarial Images Using Web-Scale Nearest-Neighbor Search
TLDR
This work hypothesizes that adversarial perturbations move the image away from the image manifold, in the sense that no physical process could have produced the adversarial image, and proposes two novel attack methods to break nearest-neighbor defense settings.
Detecting Overlapping Communities from Local Spectral Subspaces
TLDR
A systematic investigation of LOSP is provided; it is demonstrated that LOSP outperforms the Heat Kernel and PageRank diffusions on large real-world networks across multiple domains, and the problem of multiple membership identification is addressed.
...