Learning from Simulated and Unsupervised Images through Adversarial Training

@article{Shrivastava2017LearningFS,
  title={Learning from Simulated and Unsupervised Images through Adversarial Training},
  author={Ashish Shrivastava and Tomas Pfister and Oncel Tuzel and Joshua M. Susskind and Wenda Wang and Russ Webb},
  journal={2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2017},
  pages={2242-2251}
}
With recent progress in graphics, it has become more tractable to train models on synthetic images, potentially avoiding the need for expensive annotations. However, learning from synthetic images may not achieve the desired performance due to a gap between synthetic and real image distributions. To reduce this gap, we propose Simulated+Unsupervised (S+U) learning, where the task is to learn a model to improve the realism of a simulator's output using unlabeled real data, while preserving the annotation information from the simulator. We develop a method for S+U learning that uses an adversarial network similar to Generative Adversarial Networks (GANs), but with synthetic images as inputs instead of random vectors.
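
As a rough illustration of that key idea, the sketch below trains a refiner network adversarially on synthetic inputs, with an L1 self-regularization term that keeps the refined output close to the simulator image so its annotations stay valid. This is a minimal PyTorch sketch under stated assumptions (toy architectures, random tensors standing in for simulator output and unlabeled real images, an assumed weight lambda_reg); it is not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class Refiner(nn.Module):
    """Maps a synthetic image to a refined image of the same size (toy architecture)."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 1),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Scores image patches as real vs. refined (toy architecture)."""
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)  # per-patch logits

refiner, disc = Refiner(), Discriminator()
opt_r = torch.optim.Adam(refiner.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

synthetic = torch.rand(8, 1, 32, 32)  # stand-in for simulator output
real = torch.rand(8, 1, 32, 32)       # stand-in for unlabeled real images
lambda_reg = 0.5                      # self-regularization weight (assumed)

# Refiner update: fool the discriminator while staying close to the synthetic input.
refined = refiner(synthetic)
adv_logits = disc(refined)
loss_adv = bce(adv_logits, torch.ones_like(adv_logits))  # realism (adversarial) loss
loss_reg = (refined - synthetic).abs().mean()            # L1 self-regularization preserves annotations
loss_r = loss_adv + lambda_reg * loss_reg
opt_r.zero_grad(); loss_r.backward(); opt_r.step()

# Discriminator update: unlabeled real images vs. (detached) refined images.
real_logits = disc(real)
fake_logits = disc(refined.detach())
loss_d = bce(real_logits, torch.ones_like(real_logits)) + \
         bce(fake_logits, torch.zeros_like(fake_logits))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()
```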

Citations

Improving the realism of synthetic images through a combination of adversarial and perceptual losses

TLDR
This work proposes a novel method based on Generative Adversarial Networks (GANs) that improves the realism of synthetic images while preserving their annotation information, and describes how a perceptual loss can be combined with standard adversarial-network techniques to obtain better results.
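
A hedged sketch of how a perceptual loss can sit alongside an adversarial term, as that summary describes: distances are measured in a fixed feature space rather than pixel space. The frozen toy feature extractor below stands in for, e.g., pretrained VGG layers, and the 0.1 weight is an assumption, not a value from the paper.

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained, frozen feature extractor (in practice, e.g. early VGG layers).
features = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
)
for p in features.parameters():
    p.requires_grad_(False)

def perceptual_loss(refined, reference):
    # L2 distance between feature maps: preserves content/annotations at a semantic level.
    return nn.functional.mse_loss(features(refined), features(reference))

refined = torch.rand(4, 3, 64, 64, requires_grad=True)  # stand-in for refiner output
synthetic = torch.rand(4, 3, 64, 64)                    # stand-in for the simulator input
adv_loss = torch.tensor(0.0)                            # placeholder for the adversarial term
total = adv_loss + 0.1 * perceptual_loss(refined, synthetic)  # 0.1 weight is an assumption
total.backward()
```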

An Adversarial Training based Framework for Depth Domain Adaptation

TLDR
This paper uses a cyclic loss together with an adversarial loss to bring the synthetic and real depth-image domains closer by translating synthetic images into the real domain, and demonstrates that synthetic images modified in this way are useful for training deep neural networks that perform well on real images.
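
A minimal sketch of the cyclic-plus-adversarial loss combination mentioned above, assuming toy translator and discriminator networks and random tensors in place of depth images; the cycle weight of 10.0 is likewise an assumption, not the paper's setting.

```python
import torch
import torch.nn as nn

def translator():
    return nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 3, padding=1))

G_syn2real, G_real2syn = translator(), translator()
D_real = nn.Sequential(nn.Conv2d(1, 1, 4, stride=4))   # toy discriminator on the real domain
bce = nn.BCEWithLogitsLoss()

syn_depth = torch.rand(4, 1, 32, 32)   # stand-in for synthetic depth images
real_depth = torch.rand(4, 1, 32, 32)  # stand-in for real depth images

fake_real = G_syn2real(syn_depth)
# Adversarial loss: translated synthetic images should look real to D_real.
logits = D_real(fake_real)
adv = bce(logits, torch.ones_like(logits))
# Cycle loss: translating there and back should recover the original image.
cycle = (G_real2syn(fake_real) - syn_depth).abs().mean() \
      + (G_syn2real(G_real2syn(real_depth)) - real_depth).abs().mean()
loss_gen = adv + 10.0 * cycle   # cycle weight is an assumption
loss_gen.backward()             # optimizer steps and the discriminator update are omitted here
```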

Learning image classifiers from (limited) real and (abundant) synthetic data

  • Computer Science · 2018
TLDR
This work proposes a two-part solution for learning models without real-world labels, centered on the Mixed-Reality Generative Adversarial Network (MrGAN), which maps between synthetic and real data via a multi-stage, iterative process.

Generative Adversarial Networks to Synthetically Augment Data for Deep Learning based Image Segmentation

TLDR
For the medical segmentation task, it is shown that the GAN-based augmentation performs as well as standard data augmentation, and training on purely synthetic data outperforms previously reported results.

SPIGAN: Privileged Adversarial Learning

TLDR
This work proposes a new unsupervised domain adaptation algorithm, called SPIGAN, that relies on Simulator Privileged Information (PI) and Generative Adversarial Networks (GANs), using internal data from the simulator as PI during the training of a target task network.

Synthetic IR Image Refinement Using Adversarial Learning With Bidirectional Mappings

TLDR
Qualitative, quantitative, and ablation study experiments demonstrate the superiority of the proposed Synthetic IR Refinement Generative Adversarial Network compared with the state-of-the-art techniques in terms of realism and fidelity.

Refining Synthetic Images with Semantic Layouts by Adversarial Training

TLDR
A new structure for improving synthetic images, inspired by the idea of style transfer, is put forward; it efficiently reduces image distortion and minimizes the need for real-data annotation, enabling the generation of highly realistic images.

An Image-based Generator Architecture for Synthetic Image Refinement

TLDR
These are alternative generator architectures for Boundary Equilibrium Generative Adversarial Networks, motivated by Learning from Simulated and Unsupervised Images through Adversarial Training, that remove the need for a noise-based latent space and sidestep its poorly understood properties.

Self-Supervised Feature Learning by Learning to Spot Artifacts

  • S. Jenni, P. Favaro · Computer Science · 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
TLDR
A novel self-supervised learning method based on adversarial training: a discriminator network is trained to distinguish real images from images with synthetic artifacts, and features extracted from its intermediate layers can then be transferred to other data domains and tasks.
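
A hypothetical sketch of that artifact-spotting idea: corrupt images with a synthetic artifact, train a discriminator to separate real from corrupted, and reuse its backbone features afterwards. The corruption (a random patch) and the tiny architecture are simplifying assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(                       # features to transfer later
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
)
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1))
bce = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(list(backbone.parameters()) + list(head.parameters()), lr=1e-4)

real = torch.rand(8, 3, 64, 64)
corrupted = real.clone()
corrupted[:, :, 24:40, 24:40] = torch.rand(8, 3, 16, 16)    # toy "artifact": a random patch

images = torch.cat([real, corrupted])
labels = torch.cat([torch.ones(8, 1), torch.zeros(8, 1)])   # 1 = real, 0 = has artifact
loss = bce(head(backbone(images)), labels)
opt.zero_grad(); loss.backward(); opt.step()

# After training, backbone(x) yields features that can be transferred to other tasks.
```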

Unsupervised Reverse Domain Adaptation for Synthetic Medical Images via Adversarial Training

TLDR
This work proposes a novel framework that uses a reverse flow, where adversarial training makes real medical images more like synthetic images while clinically relevant features are preserved via self-regularization, improving structural similarity for endoscopy depth estimation.
...

References

Showing 1-10 of 56 references

Improved Techniques for Training GANs

TLDR
This work focuses on two applications of GANs: semi-supervised learning and the generation of images that humans find visually realistic. It presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.

Generating images with recurrent adversarial networks

TLDR
This work proposes a recurrent generative model that can be trained adversarially to generate very good image samples, and a way to quantitatively compare adversarial networks by having their generators and discriminators compete against each other.

Generative Adversarial Nets

We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G.

Generative Visual Manipulation on the Natural Image Manifold

TLDR
This paper proposes to learn the natural image manifold directly from data using a generative adversarial neural network, defines a class of image editing operations, and constrains their output to lie on that learned manifold at all times.

SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient

TLDR
Modeling the data generator as a stochastic policy in reinforcement learning (RL), SeqGAN bypasses the generator differentiation problem by directly performing policy gradient updates.
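
A hedged sketch of the policy-gradient idea behind SeqGAN: the generator is treated as a stochastic policy, a sequence is sampled token by token, and the discriminator's score is used as a REINFORCE reward, so no gradient has to flow through the discrete sampling step. The toy GRU generator and linear reward model below are assumptions, and the paper's rollout-based intermediate rewards are omitted.

```python
import torch
import torch.nn as nn

vocab_size, seq_len, hidden = 20, 8, 32
embed = nn.Embedding(vocab_size, hidden)
policy = nn.GRUCell(hidden, hidden)                                  # toy generator "policy"
to_logits = nn.Linear(hidden, vocab_size)
discriminator = nn.Sequential(nn.Linear(seq_len, 1), nn.Sigmoid())   # toy reward model (fixed here)

params = list(embed.parameters()) + list(policy.parameters()) + list(to_logits.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

# Sample a sequence token by token, keeping the log-probabilities of the chosen tokens.
h = torch.zeros(1, hidden)
token = torch.zeros(1, dtype=torch.long)
log_probs, tokens = [], []
for _ in range(seq_len):
    h = policy(embed(token), h)
    dist = torch.distributions.Categorical(logits=to_logits(h))
    token = dist.sample()
    log_probs.append(dist.log_prob(token))
    tokens.append(token)

# Reward = the discriminator's belief that the whole sequence is real (no gradient through it).
with torch.no_grad():
    reward = discriminator(torch.stack(tokens, dim=1).float()).squeeze()

# REINFORCE: increase the log-probability of sampled tokens in proportion to the reward.
loss = -(reward * torch.stack(log_probs).sum())
opt.zero_grad(); loss.backward(); opt.step()
```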

Coupled Generative Adversarial Networks

TLDR
This work proposes the coupled generative adversarial network (CoGAN), which can learn a joint distribution without any tuple of corresponding images, applies it to several joint distribution learning tasks, and demonstrates its applications to domain adaptation and image transformation.
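
A minimal sketch of the weight-sharing idea behind CoGAN, assuming toy fully connected generators: two generators share their first layers, so a single noise vector maps to a corresponding pair of images in two domains, which is what lets the coupled GANs learn a joint distribution without paired examples.

```python
import torch
import torch.nn as nn

latent = 64
shared = nn.Sequential(nn.Linear(latent, 128), nn.ReLU())      # shared high-level layers
head_a = nn.Sequential(nn.Linear(128, 28 * 28), nn.Tanh())     # domain-A specific layers
head_b = nn.Sequential(nn.Linear(128, 28 * 28), nn.Tanh())     # domain-B specific layers

z = torch.randn(16, latent)
h = shared(z)
img_a = head_a(h).view(16, 1, 28, 28)   # e.g. digits
img_b = head_b(h).view(16, 1, 28, 28)   # e.g. edge maps of the same digits
# Each batch is then scored by its own domain discriminator (omitted here); the shared
# trunk ties the two marginal GANs into a joint model over corresponding image pairs.
```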

Unsupervised Learning of Visual Structure using Predictive Generative Networks

TLDR
It is argued that prediction can serve as a powerful unsupervised loss for learning rich internal representations of high-level object features in deep neural networks trained using a CNN-LSTM-deCNN framework.

Play and Learn: Using Video Games to Train Computer Vision Models

TLDR
The results show that a convolutional network trained on synthetic data achieves a similar test error to a network that is trained on real-world data for dense image classification, and suggest that collaboration with game developers for an accessible interface to gather data is potentially a fruitful direction for future work in computer vision.

Learning Deep Object Detectors from 3D Models

TLDR
This work shows that augmenting the training data of contemporary Deep Convolutional Neural Net (DCNN) models with such synthetic data can be effective, especially when real training data is limited or not well matched to the target domain.

Augmenting Supervised Neural Networks with Unsupervised Objectives for Large-scale Image Classification

TLDR
This work demonstrates that the intermediate activations of pretrained large-scale classification networks preserve almost all the information of input images except a portion of local spatial details, and investigates joint supervised and unsupervised learning in a large-scale setting by augmenting existing neural networks with decoding pathways for reconstruction.
...