Generate to Adapt: Aligning Domains Using Generative Adversarial Networks

@inproceedings{Sankaranarayanan2018GenerateTA,
  title={Generate to Adapt: Aligning Domains Using Generative Adversarial Networks},
  author={Swami Sankaranarayanan and Yogesh Balaji and Carlos D. Castillo and Rama Chellappa},
  booktitle={2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2018},
  pages={8503-8512}
}
Domain adaptation is an actively researched problem in computer vision. Key Result: Our method achieves state-of-the-art performance in most experimental settings and is by far the only GAN-based method that has been shown to work well across different datasets such as OFFICE and DIGITS.
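As a hedged illustration of the GAN-style alignment idea described above (a domain discriminator that the feature side learns to fool), here is a minimal NumPy sketch. This is not the paper's actual architecture; the linear target-side shift, learning rate, and synthetic data are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of adversarial feature alignment (illustrative assumptions,
# not the paper's architecture): a logistic domain discriminator learns to
# separate source from target features, while a target-side shift b is
# trained adversarially to fool it.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

src = rng.normal(0.0, 1.0, size=(200, 2))   # source-domain features
tgt = rng.normal(3.0, 1.0, size=(200, 2))   # target-domain features (shifted)

b = np.zeros(2)           # learnable alignment shift for target features
w, c = np.zeros(2), 0.0   # logistic domain discriminator parameters

lr = 0.1
for _ in range(500):
    ft = tgt + b
    ps, pt = sigmoid(src @ w + c), sigmoid(ft @ w + c)
    # Discriminator step: push source toward label 1, target toward label 0.
    gw = ((ps - 1)[:, None] * src).mean(0) + (pt[:, None] * ft).mean(0)
    w -= lr * gw
    c -= lr * ((ps - 1).mean() + pt.mean())
    # Adversarial step: move target features so the discriminator is fooled.
    b -= lr * (pt - 1).mean() * w

gap_before = np.linalg.norm(tgt.mean(0) - src.mean(0))
gap_after = np.linalg.norm((tgt + b).mean(0) - src.mean(0))
print(gap_before, gap_after)  # the learned shift narrows the domain gap
```

With alternating updates the shift b absorbs most of the mean offset between the two domains, which is the alignment principle at work; the paper's method operates on deep embeddings via an image generator rather than a linear shift.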

Citations

Learning from Synthetic Data: Addressing Domain Shift for Semantic Segmentation
TLDR
This work proposes an approach based on Generative Adversarial Networks (GANs) that brings the embeddings closer in the learned feature space and can achieve state of the art results on two challenging scenarios of synthetic to real domain adaptation.
Few-Shot Adversarial Domain Adaptation
TLDR
This work provides a framework for addressing the problem of supervised domain adaptation with deep models by carefully designing a training scheme whereby the typical binary adversarial discriminator is augmented to distinguish between four different classes.
Conditional Coupled Generative Adversarial Networks for Zero-Shot Domain Adaptation
TLDR
This paper tackles the challenging zero-shot domain adaptation (ZSDA) problem, where the target-domain data is non-available in the training stage, by extending the coupled generative adversarial networks (CoGAN) into a conditioning model and demonstrates that the proposed CoCoGAN outperforms existing state of the arts in image classifications.
Learning Cross-Domain Disentangled Deep Representation with Supervision from A Single Domain
TLDR
A novel deep learning architecture with disentanglement ability is presented, which observes cross-domain image data and derives latent features with the underlying factors and is able to address cross-domain feature disentanglement with only the (attribute) supervision from the source-domain data.
Triplet Loss Network for Unsupervised Domain Adaptation
TLDR
TripNet is proposed, a method for unsupervised domain adaptation that harnesses the idea of a discriminator and Linear Discriminant Analysis to push the encoder to generate domain-invariant features that are category-informative.
A Generative Adversarial Distribution Matching Framework for Visual Domain Adaptation
TLDR
This paper proposes a novel generalized framework for adversarial domain adaptation, referred to as Generative Adversarial Distribution Matching, which can well outperform several state-of-the-art methods for cross-domain image classification problems.
Unsupervised Domain Adaptation using Deep Networks with Cross-Grafted Stacks
TLDR
This paper presents a novel approach that bridges the domain gap by projecting the source and target domains into a common association space through an unsupervised "cross-grafted representation stacking" (CGRS) mechanism.
Adversarial Code Learning for Image Generation
TLDR
This work introduces the "adversarial code learning" (ACL) module that improves overall image generation performance to several types of deep models and allows flexibility to convert non-generative models like Autoencoders and standard classification models to decent generative models.
Generating Target Image-Label Pairs for Unsupervised Domain Adaptation
TLDR
This work proposes an unsupervised domain adaptation method by generating image-label pairs in the target domain, in which the model is augmented with the generated target pairs and achieve class-level transfer.

References

Showing 1-10 of 57 references
Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks
TLDR
This generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain, and outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins.
Adversarial Discriminative Domain Adaptation
TLDR
It is shown that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and the promise of the approach is demonstrated by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task.
Deep Transfer Learning with Joint Adaptation Networks
TLDR
JAN is presented, which learn a transfer network by aligning the joint distributions of multiple domain-specific layers across domains based on a joint maximum mean discrepancy (JMMD) criterion.
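The JMMD criterion above generalizes the marginal maximum mean discrepancy (MMD) to joint distributions. As a hedged aside, a minimal NumPy sketch of the squared MMD between two samples under an RBF kernel; the function names and the bandwidth choice are illustrative assumptions.

```python
import numpy as np

# Squared maximum mean discrepancy (biased estimator) with an RBF kernel.
# MMD is near zero when two samples come from the same distribution and
# grows under distribution (domain) shift.

def rbf(a, b, gamma=0.5):
    # Pairwise squared distances, then the RBF kernel matrix.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=0.5):
    return (rbf(x, x, gamma).mean()
            + rbf(y, y, gamma).mean()
            - 2.0 * rbf(x, y, gamma).mean())

rng = np.random.default_rng(1)
same = mmd2(rng.normal(size=(100, 3)), rng.normal(size=(100, 3)))
shifted = mmd2(rng.normal(size=(100, 3)), rng.normal(2.0, 1.0, size=(100, 3)))
print(same, shifted)  # small for matched samples, large under shift
```

Minimizing such a discrepancy across network layers is the distribution-matching step that methods like JAN and DAN build on.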
Coupled Generative Adversarial Networks
TLDR
This work proposes coupled generative adversarial network (CoGAN), which can learn a joint distribution without any tuple of corresponding images, and applies it to several joint distribution learning tasks, and demonstrates its applications to domain adaptation and image transformation.
Conditional Generative Adversarial Nets
TLDR
The conditional version of generative adversarial nets is introduced, which can be constructed by simply feeding the data, y, to the generator and discriminator, and it is shown that this model can generate MNIST digits conditioned on class labels.
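The phrase "simply feeding the data, y, to the generator and discriminator" is commonly realized by concatenating a one-hot label onto the noise vector. A minimal sketch under that assumption; the dimensions are chosen arbitrarily for illustration.

```python
import numpy as np

# Illustrative sketch: conditioning a generator's input on a class label by
# concatenating a one-hot label vector to the noise vector.

def one_hot(label, num_classes):
    v = np.zeros(num_classes)
    v[label] = 1.0
    return v

z = np.random.default_rng(2).normal(size=8)  # noise vector (8 dims, assumed)
y = one_hot(3, 10)                           # label "3" of 10 classes (e.g. digits)
gen_input = np.concatenate([z, y])           # conditioned input fed to G
print(gen_input.shape)                       # (18,) = 8 noise + 10 label dims
```

The discriminator is conditioned the same way, by appending the label to its input, so both networks see which class the sample is supposed to belong to.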
Unsupervised Cross-Domain Image Generation
TLDR
The Domain Transfer Network (DTN) is presented, which employs a compound loss function that includes a multiclass GAN loss, an f-constancy component, and a regularizing component that encourages G to map samples from T to themselves.
Generative Adversarial Nets
We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
Synthetic to Real Adaptation with Generative Correlation Alignment Networks
TLDR
A Deep Generative Correlation Alignment Network (DGCAN) is proposed to synthesize images using a novel domain adaption algorithm that leverages a shape preserving loss and a low level statistic matching loss to minimize the domain discrepancy between synthetic and real images in deep feature space.
Synthetic to Real Adaptation with Deep Generative Correlation Alignment Networks
TLDR
The Deep Generative Correlation Alignment Network (DGCAN) is proposed to synthesize training images using a novel domain adaption algorithm that leverages the ℓ2 and the correlation alignment (CORAL) losses to minimize the domain discrepancy between generated and real images in deep feature space.
Learning Transferable Features with Deep Adaptation Networks
TLDR
A new Deep Adaptation Network (DAN) architecture is proposed, which generalizes deep convolutional neural network to the domain adaptation scenario and can learn transferable features with statistical guarantees, and can scale linearly by unbiased estimate of kernel embedding.