
Domain Stylization: A Strong, Simple Baseline for Synthetic to Real Image Domain Adaptation

@article{Dundar2018DomainSA,
  title={Domain Stylization: A Strong, Simple Baseline for Synthetic to Real Image Domain Adaptation},
  author={Aysegul Dundar and Ming-Yu Liu and Ting-Chun Wang and John Zedlewski and Jan Kautz},
  journal={ArXiv},
  year={2018},
  volume={abs/1807.09384}
}
Deep neural networks trained on synthetic data have largely failed to perform well when applied to real images due to the covariate shift problem. In this paper, we show that by applying a straightforward modification to an existing photorealistic style transfer algorithm, we achieve state-of-the-art synthetic-to-real domain adaptation results. We conduct extensive experimental validations on four synthetic-to-real tasks for semantic segmentation and object detection, and show that our approach…
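
The approach described above boils down to re-rendering each synthetic training image with the appearance of a randomly chosen real image before training the downstream model. The sketch below illustrates only that outline: color_stat_transfer is a crude per-channel mean/std color-matching stand-in, not the photorealistic style transfer algorithm the paper modifies, and stylize_synthetic_set is a hypothetical helper name introduced here for illustration.

import random
import numpy as np

def color_stat_transfer(content, style):
    """Match per-channel mean/std of a synthetic image to a real 'style' image.
    Both inputs are float arrays in [0, 1] with shape (H, W, 3)."""
    out = np.empty_like(content)
    for c in range(content.shape[-1]):
        c_mu, c_std = content[..., c].mean(), content[..., c].std() + 1e-6
        s_mu, s_std = style[..., c].mean(), style[..., c].std() + 1e-6
        out[..., c] = (content[..., c] - c_mu) / c_std * s_std + s_mu
    return np.clip(out, 0.0, 1.0)

def stylize_synthetic_set(synthetic_images, real_images, seed=0):
    """Pair every synthetic image with a randomly drawn real image and stylize it;
    the stylized copies (with their original labels) are then used for training."""
    rng = random.Random(seed)
    return [color_stat_transfer(img, rng.choice(real_images)) for img in synthetic_images]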

Citations

CrDoCo: Pixel-Level Domain Transfer With Cross-Domain Consistency

A novel pixel-wise adversarial domain adaptation algorithm that leverages image-to-image translation methods for data augmentation and introduces a cross-domain consistency loss that forces the adapted model to produce consistent predictions across domains.
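
A cross-domain consistency term of this kind is often implemented as a symmetric KL divergence between the per-pixel predictions for an image and for its translated counterpart. The sketch below shows that general pattern, not this paper's exact formulation; the function name and the 0.5 weighting are illustrative choices.

import torch
import torch.nn.functional as F

def cross_domain_consistency_loss(logits_orig, logits_translated):
    """Symmetric KL divergence between per-pixel class distributions predicted
    for an image and for its cross-domain translation (tensors of shape N x C x H x W)."""
    log_p = F.log_softmax(logits_orig, dim=1)
    log_q = F.log_softmax(logits_translated, dim=1)
    kl_pq = F.kl_div(log_q, log_p.exp(), reduction="batchmean")  # KL(p || q)
    kl_qp = F.kl_div(log_p, log_q.exp(), reduction="batchmean")  # KL(q || p)
    return 0.5 * (kl_pq + kl_qp)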

Into the wild: a study in rendered synthetic data and domain adaptation methods

This paper trains a deep neural network with synthetic imagery, including ordnance and overhead ship imagery, and investigates a variety of methods to adapt the model to a dataset of real images.

Sampling/Importance Resampling for Semantically Consistent Synthetic to Real Image Domain Adaptation in Urban Traffic Scenes

This work demonstrates how using adversarial training alone can introduce semantic inconsistencies in refined images and suggests leveraging available semantic labels from the target domain with a naive re-sampling approach alongside the adversarial loss.

Semantically Adaptive Image-to-image Translation for Domain Adaptation of Semantic Segmentation

This paper rethinks the generative model to enforce semantic consistency during translation, strengthening the connection between pixel-level and feature-level domain alignment, and shows that its results on synthetic-to-real benchmarks surpass the state of the art.

Sensor Transfer: Learning Optimal Sensor Effect Image Augmentation for Sim-to-Real Domain Adaptation

This work provides experiments which demonstrate that augmenting synthetic training datasets with the proposed learned augmentation framework reduces the domain gap between synthetic and real domains for object detection in urban driving scenes.

Semi-supervised domain adaptation with CycleGAN guided by a downstream task loss

A modular semi-supervised domain adaptation method for semantic segmentation that trains a downstream-task-aware CycleGAN while refraining from adapting the synthetic semantic segmentation expert, together with a demonstration that the method is applicable to complex domain adaptation tasks.

Learning From Synthetic Images via Active Pseudo-Labeling

In this paper, a novel framework, Active Pseudo-Labeling (APL), is proposed to reduce the domain gap between synthetic and real images; it predicts pseudo-labels for the unlabeled real images in the target domain by actively adapting the style of the real images to the source domain.
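
Independent of APL's specific active style adaptation, the pseudo-labeling step it relies on is commonly implemented as confidence-thresholded self-labeling. The sketch below shows that generic pattern only; the threshold and ignore_index values are assumptions, not taken from the paper.

import torch
import torch.nn.functional as F

@torch.no_grad()
def generate_pseudo_labels(model, images, threshold=0.9, ignore_index=255):
    """Generic confidence-thresholded pseudo-labeling for segmentation:
    keep the argmax class where the model is confident, mark the rest as ignored."""
    probs = F.softmax(model(images), dim=1)   # N x C x H x W
    confidence, labels = probs.max(dim=1)     # both N x H x W
    labels[confidence < threshold] = ignore_index
    return labels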

SRC3: A Video Dataset for Evaluating Domain Mismatch

New video datasets are introduced to investigate the gap between synthetic and real imagery in object detection and depth estimation, forming Synthetic-Real Counterpart 3 (SRC3), which contains multiple datasets with varying levels of scene and object complexity.

An Adversarial Training based Framework for Depth Domain Adaptation

This paper uses a cyclic loss together with an adversarial loss to bring the domains of synthetic and real depth images closer by translating synthetic images to the real domain, and demonstrates the usefulness of synthetic images modified this way for training deep neural networks that perform well on real images.

Self-Ensembling With GAN-Based Data Augmentation for Domain Adaptation in Semantic Segmentation

A self-ensembling technique based on Generative Adversarial Networks, which is computationally efficient and effective to facilitate domain alignment and outperforms state-of-the-art semantic segmentation methods on unsupervised domain adaptation benchmarks.
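
Self-ensembling in this line of work typically maintains a teacher network whose weights are an exponential moving average of the student's. A minimal sketch of that update is below; the momentum value is an assumption, and this is the generic mean-teacher pattern rather than the paper's full method.

import torch

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    """Mean-teacher style self-ensembling: each teacher parameter is an
    exponential moving average of the corresponding student parameter."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)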
...

References

Showing 1-10 of 42 references

FCNs in the Wild: Pixel-level Adversarial and Constraint-based Adaptation

This paper introduces the first domain-adaptive semantic segmentation method, proposing an unsupervised adversarial approach to pixel prediction problems that outperforms baselines across different settings on multiple large-scale datasets.
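
The adversarial alignment used throughout this literature follows a common pattern: a domain discriminator learns to separate source features from target features, and the task network is trained to fool it on the target domain. The sketch below shows that generic pattern under the assumption of a binary discriminator that outputs raw logits; it is not this particular paper's exact objective.

import torch
import torch.nn.functional as F

def adversarial_alignment_losses(domain_disc, feats_source, feats_target):
    """Generic feature-level adversarial alignment. Returns (discriminator loss,
    alignment loss); the latter is added to the task loss of the backbone so that
    target features become indistinguishable from source features."""
    d_src = domain_disc(feats_source.detach())
    d_tgt = domain_disc(feats_target.detach())
    disc_loss = (F.binary_cross_entropy_with_logits(d_src, torch.ones_like(d_src)) +
                 F.binary_cross_entropy_with_logits(d_tgt, torch.zeros_like(d_tgt)))
    d_tgt_for_backbone = domain_disc(feats_target)  # no detach: gradients reach the backbone
    align_loss = F.binary_cross_entropy_with_logits(d_tgt_for_backbone,
                                                    torch.ones_like(d_tgt_for_backbone))
    return disc_loss, align_loss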

High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs

A new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs) is presented, which significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing.

The SYNTHIA Dataset: A Large Collection of Synthetic Images for Semantic Segmentation of Urban Scenes

This paper generates a synthetic collection of diverse urban images, named SYNTHIA, with automatically generated class annotations, and conducts experiments with DCNNs that show how the inclusion of SYNTHIA in the training stage significantly improves performance on the semantic segmentation task.

CyCADA: Cycle-Consistent Adversarial Domain Adaptation

A novel, discriminatively trained Cycle-Consistent Adversarial Domain Adaptation (CyCADA) model is proposed that adapts representations at both the pixel and feature levels, enforces cycle-consistency while leveraging a task loss, and does not require aligned pairs.

Rethinking Atrous Convolution for Semantic Image Segmentation

The proposed 'DeepLabv3' system significantly improves over previous DeepLab versions without DenseCRF post-processing and attains performance comparable with other state-of-the-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.
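
Atrous (dilated) convolution enlarges the receptive field without downsampling, and DeepLabv3 arranges several such convolutions with different rates in parallel (ASPP). The sketch below shows only that core idea; it omits DeepLabv3's image-level pooling and batch normalization, and the rates are the commonly used defaults rather than a claim about the paper's exact configuration.

import torch
import torch.nn as nn

class SimpleASPP(nn.Module):
    """Parallel 3x3 convolutions with different dilation rates over the same
    feature map, concatenated and projected back (a stripped-down ASPP)."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        )
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        return self.project(torch.cat([branch(x) for branch in self.branches], dim=1))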

Unsupervised Cross-Domain Image Generation

The Domain Transfer Network (DTN) is presented, which employs a compound loss function that includes a multiclass GAN loss, an f-constancy component, and a regularizing component that encourages G to map samples from T to themselves.

Image-to-Image Translation with Conditional Adversarial Networks

Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
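
The pix2pix training objective combines a conditional adversarial term with an L1 reconstruction term toward the paired target image. A minimal generator-side sketch is below; the discriminator and data pipeline are omitted, and the lambda weight follows the commonly cited default rather than anything stated in this summary.

import torch
import torch.nn.functional as F

def pix2pix_generator_loss(d_fake_logits, fake_img, target_img, lambda_l1=100.0):
    """Generator objective: fool the conditional discriminator and stay close
    (in L1) to the paired ground-truth image."""
    adv = F.binary_cross_entropy_with_logits(d_fake_logits,
                                             torch.ones_like(d_fake_logits))
    rec = F.l1_loss(fake_img, target_img)
    return adv + lambda_l1 * rec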

Geodesic flow kernel for unsupervised domain adaptation

This paper proposes a new kernel-based method that takes advantage of low-dimensional structures that are intrinsic to many vision datasets, and introduces a metric that reliably measures the adaptability between a pair of source and target domains.

Learning from Simulated and Unsupervised Images through Adversarial Training

This work develops a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors, and makes several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training.
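
The simulated-plus-unsupervised setup trains a refiner whose output must look real to a discriminator while staying close to the synthetic input so that the original annotations remain valid. A hedged sketch of the refiner-side objective is below; the loss weight is an assumed value and the refiner/discriminator networks themselves are omitted.

import torch
import torch.nn.functional as F

def refiner_loss(d_refined_logits, refined_img, synthetic_img, lambda_reg=1.0):
    """Adversarial realism term plus a self-regularization L1 term that keeps
    the refined image close to the original synthetic input."""
    adv = F.binary_cross_entropy_with_logits(d_refined_logits,
                                             torch.ones_like(d_refined_logits))
    reg = F.l1_loss(refined_img, synthetic_img)
    return adv + lambda_reg * reg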

Deep Domain Confusion: Maximizing for Domain Invariance

This work proposes a new CNN architecture that introduces an adaptation layer and an additional domain confusion loss to learn a representation that is both semantically meaningful and domain invariant, and shows that a domain confusion metric can be used for model selection to determine the dimension of the adaptation layer and its best position in the CNN architecture.
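
Domain confusion losses in this family of methods are usually MMD-style distances between source and target activations of the adaptation layer. The simplest (linear-kernel) variant is sketched below as an illustration of the idea, not as this paper's exact objective.

import torch

def mmd_linear(source_feats, target_feats):
    """Linear-kernel maximum mean discrepancy between two batches of features
    (shape N x D): the squared distance between the per-domain feature means."""
    delta = source_feats.mean(dim=0) - target_feats.mean(dim=0)
    return torch.dot(delta, delta)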