Corpus ID: 222134006

A Simple Framework for Uncertainty in Contrastive Learning

@article{Wu2020ASF,
  title={A Simple Framework for Uncertainty in Contrastive Learning},
  author={Mike Wu and Noah D. Goodman},
  journal={ArXiv},
  year={2020},
  volume={abs/2010.02038}
}
Contrastive approaches to representation learning have recently shown great promise. In contrast to generative approaches, these contrastive models learn a deterministic encoder with no notion of uncertainty or confidence. In this paper, we introduce a simple approach based on "contrasting distributions" that learns to assign uncertainty for pretrained contrastive representations. In particular, we train a deep network from a representation to a distribution in representation space, whose…
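The abstract above only sketches the idea, so here is a minimal illustration, assuming PyTorch and a diagonal-Gaussian parameterization of the "distribution in representation space"; the module name UncertaintyHead, the hidden size, and the variance-as-uncertainty readout are illustrative assumptions, not the authors' implementation.

# Minimal sketch: a small head mapping a frozen, pretrained contrastive
# representation to the parameters of a distribution over representation space.
# Assumes PyTorch; all names and hyperparameters below are illustrative.
import torch
import torch.nn as nn


class UncertaintyHead(nn.Module):
    """Predicts a diagonal Gaussian (mean, log-variance) in representation space."""

    def __init__(self, dim: int, hidden: int = 512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2 * dim),  # concatenated mean and log-variance
        )

    def forward(self, z: torch.Tensor):
        mu, log_var = self.net(z).chunk(2, dim=-1)
        return mu, log_var


# Usage with a pretrained encoder (stand-in here); per-example uncertainty is
# read off as the average predicted variance, one of several plausible readouts.
encoder = nn.Identity()              # placeholder for a frozen contrastive encoder
head = UncertaintyHead(dim=128)

x = torch.randn(4, 128)              # placeholder inputs / precomputed features
with torch.no_grad():
    z = encoder(x)
mu, log_var = head(z)
uncertainty = log_var.exp().mean(dim=-1)   # one scalar per example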

Citations

Self-supervised Out-of-distribution Detection for Cardiac CMR Segmentation
TLDR
This work proposes a simple method to identify out-of-distribution (OOD) samples that does not require adapting the model architecture or access to a separate OOD dataset during training, and finds that it is more effective at detecting OOD samples than state-of-the-art post-hoc OOD detection and uncertainty estimation approaches.
On the Practicality of Deterministic Epistemic Uncertainty
TLDR
It is found that, while DUMs scale to realistic vision tasks and perform well on OOD detection, the practicality of current methods is undermined by poor calibration under realistic distributional shifts.

References

SHOWING 1-10 OF 75 REFERENCES
Meta-Amortized Variational Inference and Learning
TLDR
A doubly-amortized variational inference procedure is presented that learns transferable latent representations that generalize across several related distributions and significantly outperforms baselines on downstream image classification tasks on MNIST and NORB.
On Mutual Information in Contrastive Learning for Visual Representations
TLDR
This work shows that this family of algorithms maximizes a lower bound on the mutual information between two or more "views" of an image where typical views come from a composition of image augmentations, and finds that the choice of negative samples and views are critical to the success of these algorithms.
Momentum Contrast for Unsupervised Visual Representation Learning
We present Momentum Contrast (MoCo) for unsupervised visual representation learning. From a perspective on contrastive learning as dictionary look-up, we build a dynamic dictionary with a queue and a moving-averaged encoder.
Local Aggregation for Unsupervised Learning of Visual Embeddings
TLDR
This work describes a method that trains an embedding function to maximize a metric of local aggregation, causing similar data instances to move together in the embedding space, while allowing dissimilar instances to separate.
Unsupervised Visual Representation Learning by Context Prediction
TLDR
It is demonstrated that the feature representation learned using this within-image context indeed captures visual similarity across images and allows us to perform unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset.
Multimodal Generative Models for Scalable Weakly-Supervised Learning
TLDR
A multimodal variational autoencoder that uses a product-of-experts inference network and a sub-sampled training paradigm to solve the multimodal inference problem and shares parameters to efficiently learn under any combination of missing modalities, thereby enabling weakly-supervised learning.
Unsupervised Representation Learning by Predicting Image Rotations
TLDR
This work proposes to learn image features by training ConvNets to recognize the 2D rotation applied to their input images, and demonstrates both qualitatively and quantitatively that this apparently simple task provides a very powerful supervisory signal for semantic feature learning.
Improved Baselines with Momentum Contrastive Learning
TLDR
With simple modifications to MoCo, this note establishes stronger baselines that outperform SimCLR and do not require large training batches, and hopes this will make state-of-the-art unsupervised learning research more accessible.
Learning deep representations by mutual information estimation and maximization
TLDR
It is shown that structure matters: incorporating knowledge about locality in the input into the objective can significantly improve a representation’s suitability for downstream tasks and is an important step towards flexible formulations of representation learning objectives for specific end-goals.
LSUN: Construction of a Large-scale Image Dataset using Deep Learning with Humans in the Loop
TLDR
This work proposes to amplify human effort through a partially automated labeling scheme, leveraging deep learning with humans in the loop, and constructs a new image dataset, LSUN, which contains around one million labeled images for each of 10 scene categories and 20 object categories.