Fast Image Deconvolution using Hyper-Laplacian Priors
TLDR
This paper describes a deconvolution approach that is several orders of magnitude faster than existing techniques that use hyper-Laplacian priors: it can deconvolve a 1-megapixel image in under 3 seconds while achieving quality comparable to existing methods that take ~20 minutes.
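For context, here is a minimal NumPy sketch of the kind of hyper-Laplacian-regularized objective this line of work minimizes, assuming a known blur kernel; the function name and the lam/alpha defaults are illustrative, and the paper's speed comes from an alternating-minimization solver rather than from evaluating this cost directly.

import numpy as np
from scipy.signal import fftconvolve

def hyper_laplacian_cost(x, y, kernel, lam=2000.0, alpha=2.0 / 3.0):
    """lam/2 * ||k (*) x - y||^2  +  sum_i |grad(x)_i|^alpha, with alpha < 1."""
    residual = fftconvolve(x, kernel, mode="same") - y
    data_term = 0.5 * lam * np.sum(residual ** 2)
    dx = np.abs(np.diff(x, axis=1))  # horizontal finite differences
    dy = np.abs(np.diff(x, axis=0))  # vertical finite differences
    prior_term = np.sum(dx ** alpha) + np.sum(dy ** alpha)
    return data_term + prior_term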
Blind deconvolution using a normalized sparsity measure
TLDR
A new type of image regularization that gives the lowest cost for the true sharp image is introduced, allowing a very simple cost formulation to be used for the blind deconvolution model and obviating the need for additional methods.
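The normalized sparsity measure in question is the l1/l2 ratio of the image's gradient content: blur makes gradients less sparse, so the ratio is lowest for the sharp image. A minimal sketch (function name is mine):

import numpy as np

def normalized_sparsity(x):
    """l1/l2 ratio of image gradients; lower for sharper images."""
    dx = np.diff(x, axis=1).ravel()
    dy = np.diff(x, axis=0).ravel()
    g = np.concatenate([dx, dy])
    return np.abs(g).sum() / np.sqrt((g ** 2).sum())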
Domain Separation Networks
TLDR
The novel architecture results in a model that outperforms the state-of-the-art on a range of unsupervised domain adaptation scenarios, and additionally produces visualizations of the private and shared representations, enabling interpretation of the domain adaptation process.
Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks
TLDR
This generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain, and outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins.
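A hedged sketch of the kind of combined generator objective such a method trains with: an adversarial term that pushes adapted images to look target-like, plus a task term that keeps them classifiable with their source labels. The helper names, the alpha weight, and the NumPy formulation are assumptions for illustration, not the paper's implementation.

import numpy as np

def sigmoid_ce_with_logits(logits, targets):
    # numerically stable binary cross-entropy on raw logits
    return np.mean(np.maximum(logits, 0) - logits * targets
                   + np.log1p(np.exp(-np.abs(logits))))

def softmax_ce(logits, labels):
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -np.mean(logp[np.arange(len(labels)), labels])

def generator_loss(d_logits_on_adapted, task_logits, source_labels, alpha=1.0):
    """Fool the discriminator on adapted images while preserving the
    source labels through the task classifier."""
    adv = sigmoid_ce_with_logits(d_logits_on_adapted,
                                 np.ones_like(d_logits_on_adapted))
    task = softmax_ce(task_logits, source_labels)
    return adv + alpha * task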
Contrastive Multiview Coding
TLDR
Key properties of the multiview contrastive learning approach are analyzed, finding that the contrastive loss outperforms a popular alternative based on cross-view prediction, and that the more views learned from, the better the resulting representation captures underlying scene semantics.
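A minimal NumPy sketch of the two-view contrastive (InfoNCE-style) loss this line of work builds on, where matching rows of the two view embeddings are positives and all other pairs in the batch are negatives; names and the temperature default are illustrative.

import numpy as np

def two_view_infonce(z1, z2, tau=0.07):
    """Row i of z1 and row i of z2 embed two views of the same scene
    (positives); every other pairing in the batch is a negative."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / tau                           # (N, N) similarities
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(logp))                       # positives on the diagonal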
Deconvolutional networks
TLDR
This work presents a learning framework in which features that capture mid-level cues emerge spontaneously from image data, based on the convolutional decomposition of images under a sparsity constraint; the approach is entirely unsupervised.
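The core objective here is convolutional sparse coding: reconstruct the image as a sum of feature maps convolved with learned filters, under an l1 penalty that keeps the maps sparse. A minimal single-layer cost sketch, with names and the lam default chosen for illustration:

import numpy as np
from scipy.signal import fftconvolve

def conv_sparse_coding_cost(image, feature_maps, filters, lam=1.0):
    """lam/2 * ||sum_k z_k (*) f_k - image||^2  +  sum_k ||z_k||_1."""
    recon = sum(fftconvolve(z, f, mode="same")
                for z, f in zip(feature_maps, filters))
    data_term = 0.5 * lam * np.sum((recon - image) ** 2)
    sparsity = sum(np.abs(z).sum() for z in feature_maps)
    return data_term + sparsity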
Contrastive Representation Distillation
TLDR
The resulting new objective outperforms knowledge distillation and other cutting-edge distillers on a variety of knowledge transfer tasks, including single model compression, ensemble distillation, and cross-modal transfer.
Supervised Contrastive Learning
TLDR
A novel training methodology that consistently outperforms cross-entropy on supervised learning tasks across different architectures and data augmentations is proposed; it modifies the batch contrastive loss, which has recently been shown to be very effective at learning powerful representations in the self-supervised setting.
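A minimal NumPy sketch of a supervised contrastive loss in the spirit of this paper, where every other same-label sample in the batch is a positive for the anchor; the normalization and temperature follow common practice rather than the paper's released code.

import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """z: (N, D) embeddings; labels: (N,) int array.
    Each anchor attracts all other same-label samples in the batch."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = (z @ z.T) / tau
    n = sim.shape[0]
    np.fill_diagonal(sim, -np.inf)  # never contrast a sample with itself
    row_max = sim.max(axis=1, keepdims=True)
    logp = sim - row_max - np.log(np.exp(sim - row_max).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~np.eye(n, dtype=bool)
    # mean log-probability over each anchor's positives
    # (anchors with no positives contribute zero)
    per_anchor = np.where(pos, logp, 0.0).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return -np.mean(per_anchor)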
Restoring an Image Taken through a Window Covered with Dirt or Rain
TLDR
This work presents a post-capture image processing solution that can remove localized rain and dirt artifacts from a single image, and demonstrates effective removal of dirt and rain in outdoor test conditions.
What makes for good views for contrastive learning
TLDR
This paper uses empirical analysis to better understand the importance of view selection, arguing that the mutual information (MI) between views should be reduced while keeping task-relevant information intact; it devises unsupervised and semi-supervised frameworks that learn effective views by aiming to reduce their MI.