Corpus ID: 238857258

Inverse Problems Leveraging Pre-trained Contrastive Representations

@article{Ravula2021InversePL,
  title={Inverse Problems Leveraging Pre-trained Contrastive Representations},
  author={Sriram Ravula and Georgios Smyrnis and Matt Jordan and Alexandros G. Dimakis},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.07439}
}
We study a new family of inverse problems for recovering representations of corrupted data. We assume access to a pre-trained representation learning network R(x) that operates on clean images, like CLIP. The problem is to recover the representation of an image R(x), if we are only given a corrupted version A(x), for some known forward operator A. We propose a supervised inversion method that uses a contrastive objective to obtain excellent representations for highly corrupted images. Using a… 
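The abstract only sketches the recovery objective, so below is a minimal, hypothetical PyTorch-style illustration of one way such a supervised contrastive inversion loss could look: an encoder for corrupted inputs is trained so that its embeddings match the frozen representations R(x) of the corresponding clean images. The names student, pretrained_R, forward_op, and the specific InfoNCE-style loss are assumptions for illustration, not details confirmed by the paper.

import torch
import torch.nn.functional as F

def contrastive_inversion_loss(z_student, z_clean, temperature=0.07):
    # InfoNCE-style objective (assumed): each corrupted image's embedding should be
    # most similar to the representation of its own clean counterpart within the batch.
    z_student = F.normalize(z_student, dim=-1)
    z_clean = F.normalize(z_clean, dim=-1)
    logits = z_student @ z_clean.t() / temperature        # (B, B) similarity matrix
    targets = torch.arange(z_student.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)

def train_step(student, pretrained_R, forward_op, x_clean, optimizer):
    # One supervised training step: corrupt the clean batch with the known forward
    # operator A, embed both views, and update only the student encoder.
    with torch.no_grad():
        z_clean = pretrained_R(x_clean)                   # frozen targets R(x)
    z_student = student(forward_op(x_clean))              # embeddings of A(x)
    loss = contrastive_inversion_loss(z_student, z_clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

At inference time, the trained student maps a corrupted observation A(x) directly to an estimate of R(x), which can then be used for downstream tasks such as classification or retrieval.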
DenseCLIP: Extract Free Dense Labels from CLIP
TLDR
The findings suggest that DenseCLIP can serve as a new reliable source of supervision for dense prediction tasks, specifically enabling annotation-free semantic segmentation with Contrastive Language-Image Pre-training (CLIP) models.

References

SHOWING 1-10 OF 49 REFERENCES
Invertible generative models for inverse problems: mitigating representation error and dataset bias
TLDR
It is demonstrated that invertible neural networks, which have zero representation error by design, can be effective natural signal priors for inverse problems such as denoising, compressive sensing, and inpainting.
Deep Decoder: Concise Image Representations from Untrained Non-convolutional Networks
TLDR
This paper proposes a simple, untrained image model, called the deep decoder: a deep neural network that can generate natural images from very few weight parameters, using an architecture with no convolutions and fewer weight parameters than the output dimensionality.
Contrastive Learning with Adversarial Examples
TLDR
A new family of adversarial examples for contrastive learning is introduced and used to define a new adversarial training algorithm for SSL, denoted CLAE, which improves the performance of several existing CL baselines on multiple datasets.
Data-Efficient Image Recognition with Contrastive Predictive Coding
TLDR
This work revisits and improves Contrastive Predictive Coding, an unsupervised objective for learning representations that make the variability in natural signals more predictable, and produces features that support state-of-the-art linear classification accuracy on the ImageNet dataset.
Task-Aware Compressed Sensing with Generative Adversarial Networks
TLDR
This paper uses Generative Adversarial Networks (GANs) to impose structure in compressed sensing problems, replacing the usual sparsity constraint, and proposes to train the GANs in a task-aware fashion, specifically for reconstruction tasks.
Deep Learning Techniques for Inverse Problems in Imaging
TLDR
A taxonomy is presented for categorizing different problems and reconstruction methods based on deep neural networks, along with a discussion of the tradeoffs associated with these different reconstruction approaches, their caveats, and common failure modes.
Fast and Provable ADMM for Learning with Generative Priors
TLDR
This work proposes a (linearized) Alternating Direction Method of Multipliers (ADMM) algorithm for minimizing a convex function subject to a nonconvex constraint, which can efficiently handle non-smooth objectives as well as exploit efficient partial minimization procedures, and is thus faster in many practical scenarios.
Contrasting Contrastive Self-Supervised Representation Learning Models
TLDR
This paper analyzes contrastive approaches as one of the most successful and popular variants of self-supervised representation learning and examines over 700 training experiments including 30 encoders, 4 pre-training datasets and 20 diverse downstream tasks.
Representation Learning with Contrastive Predictive Coding
TLDR
This work proposes a universal unsupervised learning approach to extract useful representations from high-dimensional data, which it calls Contrastive Predictive Coding, and demonstrates that the approach is able to learn useful representations achieving strong performance on four distinct domains: speech, images, text and reinforcement learning in 3D environments.
Momentum Contrast for Unsupervised Visual Representation Learning
We present Momentum Contrast (MoCo) for unsupervised visual representation learning. From a perspective on contrastive learning as dictionary look-up, we build a dynamic dictionary with a queue and a moving-averaged encoder.
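As a companion to the MoCo excerpt above, the following is a minimal, hypothetical PyTorch-style sketch of the queue-plus-momentum-encoder idea it describes; encoder_q, encoder_k, and queue are assumed names, and this is not the authors' implementation.

import torch
import torch.nn.functional as F

@torch.no_grad()
def momentum_update(encoder_q, encoder_k, m=0.999):
    # The key encoder tracks the query encoder as an exponential moving average.
    for p_q, p_k in zip(encoder_q.parameters(), encoder_k.parameters()):
        p_k.data.mul_(m).add_(p_q.data, alpha=1.0 - m)

def moco_loss(q, k, queue, temperature=0.07):
    # Contrastive loss: one positive key per query, negatives drawn from the queue.
    # queue: (K, D) tensor of previously encoded keys, assumed already L2-normalized.
    q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
    l_pos = (q * k).sum(dim=-1, keepdim=True)             # (B, 1) positive logits
    l_neg = q @ queue.t()                                  # (B, K) negative logits
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    targets = torch.zeros(q.size(0), dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, targets)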