Image-to-Image Translation with Conditional Adversarial Networks

@inproceedings{Isola2017ImagetoImageTW,
  title={Image-to-Image Translation with Conditional Adversarial Networks},
  author={Phillip Isola and Jun-Yan Zhu and Tinghui Zhou and Alexei A. Efros},
  booktitle={2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2017},
  pages={5967-5976}
}
We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. [...] As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.
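The "learned loss" the abstract alludes to is the paper's conditional GAN objective, which the generator minimizes and the discriminator maximizes, combined with an L1 reconstruction term:

```latex
% Conditional GAN objective:
\mathcal{L}_{cGAN}(G, D) =
  \mathbb{E}_{x,y}\left[\log D(x, y)\right]
  + \mathbb{E}_{x,z}\left[\log\left(1 - D(x, G(x, z))\right)\right]

% L1 term keeping outputs close to the ground truth:
\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y,z}\left[\lVert y - G(x, z) \rVert_1\right]

% Full objective:
G^* = \arg\min_G \max_D \; \mathcal{L}_{cGAN}(G, D) + \lambda\, \mathcal{L}_{L1}(G)
```

Here x is the input image, y the target, z a noise source, and λ weighs the reconstruction term against the adversarial term.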
Unsupervised Image-to-Image Translation with Generative Adversarial Networks
TLDR
This work develops a two-step (unsupervised) learning method to translate images between different domains using unlabeled images, without specifying any correspondence between them, thereby avoiding the cost of acquiring labeled data.
Perceptual Adversarial Networks With a Feature Pyramid for Image Translation
TLDR
This paper decomposes the image into a set of images via a feature pyramid, designs separate loss components for images of specific bandpasses, and finds that the overall perceptual adversarial loss captures not only semantic features but also appearance.
Image-to-image translation using a relativistic generative adversarial network
TLDR
An improved image-to-image translation method using a relativistic generative adversarial network, which converges easily; rather than trying to separate fake data from real directly, its discriminator judges which of the two is the faker one.
Content and Colour Distillation for Learning Image Translations with the Spatial Profile Loss
TLDR
This paper proposes a novel method of computing the loss directly between the source and target images that enables proper distillation of shape/content and colour/style, and shows that this is useful in typical image-to-image translations.
Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks
TLDR
This work presents an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples, and introduces a cycle consistency loss to push F(G(X)) ≈ X (and vice versa).
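The cycle consistency loss named in this summary penalizes the round-trip reconstruction error in both directions, for mappings G: X → Y and F: Y → X:

```latex
\mathcal{L}_{cyc}(G, F) =
  \mathbb{E}_{x}\left[\lVert F(G(x)) - x \rVert_1\right]
  + \mathbb{E}_{y}\left[\lVert G(F(y)) - y \rVert_1\right]
```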
Image-to-Image Translation Using Generative Adversarial Network
  • Kusam Lata, M. Dave, K. Nishanth
  • Computer Science
  • 2019 3rd International conference on Electronics, Communication and Aerospace Technology (ICECA)
  • 2019
TLDR
Conditional GANs are used to translate images based on given conditions, and the model's performance is analyzed via hyper-parameter tuning.
In2I: Unsupervised Multi-Image-to-Image Translation Using Generative Adversarial Networks
TLDR
This paper introduces a Generative Adversarial Network (GAN) based framework along with a multi-modal generator structure and a new loss term, latent consistency loss, and shows that leveraging multiple inputs generally improves the visual quality of the translated images.
Equivariant Adversarial Network for Image-to-image Translation
Image-to-image translation aims to learn a mapping from a source domain to a target domain. However, three main challenges remain: lack of paired datasets, multimodality, and diversity. [...]
An Input-Perceptual Reconstruction Adversarial Network for Paired Image-to-Image Conversion
TLDR
This work proposes a novel Input-Perceptual and Reconstruction Adversarial Network (IP-RAN) as an all-purpose framework for imperfect paired image-to-image conversion problems, and demonstrates experimentally that the method significantly outperforms current state-of-the-art techniques.
Toward Learning a Unified Many-to-Many Mapping for Diverse Image Translation
TLDR
A novel generative adversarial network (GAN) based model, InjectionGAN, is proposed to learn a many-to-many mapping of high quality for challenging image-to-image translation tasks where no pairing information exists in the training dataset.

References

Showing 1-10 of 65 references
Generative Image Modeling Using Style and Structure Adversarial Networks
TLDR
This paper factorizes the image generation process and proposes the Style and Structure Generative Adversarial Network, a model that is interpretable, generates more realistic images, and can be used to learn unsupervised RGBD representations.
Generative Visual Manipulation on the Natural Image Manifold
TLDR
This paper proposes to learn the natural image manifold directly from data using a generative adversarial neural network, defines a class of image editing operations, and constrains their output to lie on that learned manifold at all times.
Learning to Generate Images of Outdoor Scenes from Attributes and Semantic Layouts
TLDR
A novel deep conditional generative adversarial network architecture that draws its strength from semantic layout and scene attributes integrated as conditioning variables, and is able to generate realistic outdoor scene images under different conditions (e.g. day-night, sunny-foggy) with clear object boundaries.
Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
  • C. Ledig, Lucas Theis, +6 authors W. Shi
  • Computer Science, Mathematics
  • 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2017
TLDR
SRGAN, a generative adversarial network (GAN) for image super-resolution (SR), is presented; to the authors' knowledge, it is the first framework capable of inferring photo-realistic natural images for 4× upscaling factors, using a perceptual loss function that consists of an adversarial loss and a content loss.
Learning What and Where to Draw
TLDR
This work proposes a new model, the Generative Adversarial What-Where Network (GAWWN), that synthesizes images given instructions describing what content to draw in which location, and shows high-quality 128×128 image synthesis on the Caltech-UCSD Birds dataset.
Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks
TLDR
A generative parametric model capable of producing high-quality samples of natural images, using a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion.
Generative Adversarial Text to Image Synthesis
TLDR
A novel deep architecture and GAN formulation is developed to effectively bridge advances in text and image modeling, translating visual concepts from characters to pixels.
Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks
TLDR
Markovian Generative Adversarial Networks (MGANs) are proposed, a method for training generative networks for efficient texture synthesis that surpasses previous neural texture synthesizers by a significant margin and applies to texture synthesis, style transfer, and video stylization.
Conditional generative adversarial nets for convolutional face generation
We apply an extension of generative adversarial networks (GANs) [8] to a conditional setting. In the GAN framework, a “generator” network is tasked with fooling a “discriminator” network into [...]
Improved Techniques for Training GANs
TLDR
This work focuses on two applications of GANs: semi-supervised learning and the generation of images that humans find visually realistic; it presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.