Extremely Weak Supervised Image-to-Image Translation for Semantic Segmentation

@article{Shukla2019ExtremelyWS,
  title={Extremely Weak Supervised Image-to-Image Translation for Semantic Segmentation},
  author={Samarth Shukla and Luc Van Gool and Radu Timofte},
  journal={2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)},
  year={2019},
  pages={3368-3377}
}
Recent advances in generative models and adversarial training have led to a flourishing image-to-image (I2I) translation literature. Current I2I translation approaches require training images from the two domains that are either all paired (supervised) or all unpaired (unsupervised). In practice, obtaining paired training data in sufficient quantities is often costly and cumbersome. Therefore, solutions that employ unpaired data, while less accurate, are largely preferred. In this paper…
Zero-Pair Image to Image Translation using Domain Conditional Normalization
TL;DR: This paper employs a single encoder-decoder generator and analyzes different implementations of domain conditional normalization to obtain the desired target-domain output, improving both qualitatively and quantitatively over the compared methods while using far fewer parameters.
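For illustration, a minimal PyTorch sketch of one common way to implement domain conditional normalization: instance normalization whose affine parameters are selected by the target-domain index. The class and parameter names here are assumptions for this sketch, not the paper's code.

```python
import torch
import torch.nn as nn

class DomainConditionalNorm(nn.Module):
    """Instance normalization whose scale/shift are selected by the
    target-domain index -- one common way to condition a single
    encoder-decoder generator on the output domain (illustrative,
    not the paper's exact formulation)."""

    def __init__(self, num_features: int, num_domains: int):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_features, affine=False)
        # One learned (gamma, beta) pair per target domain.
        self.gamma = nn.Embedding(num_domains, num_features)
        self.beta = nn.Embedding(num_domains, num_features)
        nn.init.ones_(self.gamma.weight)
        nn.init.zeros_(self.beta.weight)

    def forward(self, x: torch.Tensor, domain: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W); domain: (N,) long tensor of domain indices.
        h = self.norm(x)
        g = self.gamma(domain).unsqueeze(-1).unsqueeze(-1)  # (N, C, 1, 1)
        b = self.beta(domain).unsqueeze(-1).unsqueeze(-1)
        return g * h + b
```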
DG-Font: Deformable Generative Networks for Unsupervised Font Generation
TL;DR: This paper introduces a feature deformation skip connection (FDSC) that predicts pairs of displacement maps and uses them to apply deformable convolution to the low-level feature maps from the content encoder.
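A minimal sketch of that mechanism, assuming torchvision's deform_conv2d and illustrative layer shapes rather than DG-Font's exact architecture:

```python
import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

class FeatureDeformationSkip(nn.Module):
    """Sketch of a feature-deformation skip connection in the spirit of
    DG-Font's FDSC: predict displacement maps, then deformably convolve
    the low-level content features. Channel counts and kernel size are
    illustrative assumptions."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        k = kernel_size
        # Predict one (dy, dx) displacement pair per kernel sample location.
        self.offset_pred = nn.Conv2d(channels, 2 * k * k, 3, padding=1)
        self.weight = nn.Parameter(torch.randn(channels, channels, k, k) * 0.01)
        self.k = k

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        offsets = self.offset_pred(feat)  # (N, 2*k*k, H, W)
        return deform_conv2d(feat, offsets, self.weight, padding=self.k // 2)
```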

References

Showing 1–10 of 37 references
Learning image-to-image translation using paired and unpaired training samples
TL;DR: This work proposes a new general-purpose image-to-image translation model able to utilize both paired and unpaired training data simultaneously, and is the first to consider such a hybrid setup in image-to-image translation.
Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks
TL;DR: This work presents an approach for learning to translate an image from a source domain X to a target domain Y in the absence of paired examples, introducing a cycle consistency loss to push F(G(X)) ≈ X (and vice versa).
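The cycle consistency term is simple to state in code; a minimal sketch, assuming G: X→Y and F: Y→X are callable networks:

```python
import torch

def cycle_consistency_loss(G, F, x, y):
    """CycleGAN-style cycle loss: push F(G(x)) back to x and G(F(y))
    back to y with an L1 penalty (the standard formulation; network
    definitions and loss weights omitted)."""
    loss_x = torch.mean(torch.abs(F(G(x)) - x))  # forward cycle: X -> Y -> X
    loss_y = torch.mean(torch.abs(G(F(y)) - y))  # backward cycle: Y -> X -> Y
    return loss_x + loss_y
```

In CycleGAN this term is weighted and added to adversarial losses for both mapping directions.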
Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks
TL;DR: This generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain, outperforming the state of the art on a number of unsupervised domain adaptation scenarios by large margins.
DualGAN: Unsupervised Dual Learning for Image-to-Image Translation
TL;DR: A novel dual-GAN mechanism enables image translators to be trained from two sets of unlabeled images from two domains, achieving comparable or slightly better results than a conditional GAN trained on fully labeled data.
Image-to-Image Translation with Conditional Adversarial Networks
TL;DR: Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems; the approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
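A minimal sketch of the pix2pix generator objective: a conditional GAN term (the discriminator sees the input paired with the output) plus an L1 term pulling the output toward the ground truth. The network interfaces are assumptions for this sketch.

```python
import torch
import torch.nn.functional as F

def pix2pix_generator_loss(D, G, x, y, lam: float = 100.0):
    """Conditional GAN + L1 objective for the generator, in the spirit
    of pix2pix. D(x, y_hat) is assumed to return real/fake logits."""
    fake = G(x)
    pred_fake = D(x, fake)  # conditional discriminator sees (input, output)
    adv = F.binary_cross_entropy_with_logits(
        pred_fake, torch.ones_like(pred_fake))  # fool the discriminator
    l1 = F.l1_loss(fake, y)                     # stay close to ground truth
    return adv + lam * l1
```

The L1 term anchors the output to the ground truth while the adversarial term sharpens it; λ = 100 is the weighting used in the pix2pix paper.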
Toward Multimodal Image-to-Image Translation
TL;DR: This work models a distribution of possible outputs in a conditional generative modeling setting, helping prevent a many-to-one mapping from the latent code to the output during training, also known as mode collapse.
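One ingredient that discourages such a many-to-one mapping is a latent regression term: an encoder must recover the sampled code from the generated output, so the generator cannot ignore it. A minimal sketch, with assumed interfaces G(x, z) and E(y):

```python
import torch

def latent_regression_loss(G, E, x, z):
    """BicycleGAN-style latent recovery: reconstruct the sampled code z
    from the generated output, penalizing generators that collapse many
    codes to one output. Interfaces are assumptions, not the paper's code."""
    y_fake = G(x, z)   # output conditioned on input image and latent code
    z_rec = E(y_fake)  # recover the latent code from the output
    return torch.mean(torch.abs(z_rec - z))
```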
Semantic Image Inpainting with Deep Generative Models
TL;DR: A novel method for semantic image inpainting generates the missing content by conditioning on the available data; it successfully predicts information in large missing regions and achieves pixel-level photorealism, significantly outperforming state-of-the-art methods.
Learning from Simulated and Unsupervised Images through Adversarial Training
TL;DR: This work develops a method for S+U (simulated + unsupervised) learning that uses an adversarial network similar to generative adversarial networks (GANs), but with synthetic images as inputs instead of random vectors, and makes several key modifications to the standard GAN algorithm to preserve annotations, avoid artifacts, and stabilize training.
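The annotation-preserving modification is a self-regularization term that keeps the refined image close to the synthetic input. A minimal sketch of the refiner objective; the non-saturating adversarial form, D's interface, and the weight lam are assumptions, not the paper's exact choices:

```python
import torch

def refiner_loss(R, D, x_syn, lam: float = 1.0):
    """SimGAN-spirit refiner objective: make refined synthetic images
    look real while staying close (L1) to the synthetic input so that
    its annotations remain valid."""
    refined = R(x_syn)
    pred = D(refined)  # logits that the image is real
    adv = -torch.log(torch.sigmoid(pred) + 1e-8).mean()  # realism term
    reg = torch.mean(torch.abs(refined - x_syn))         # self-regularization
    return adv + lam * reg
```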
Context Encoders: Feature Learning by Inpainting
TL;DR: A context encoder learns a representation that captures not just appearance but also the semantics of visual structures, and can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.
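The core training signal is masked reconstruction: the network sees the image with a region dropped out and is penalized only on the missing region. A minimal sketch with assumed interfaces (the original also adds an adversarial term):

```python
import torch

def inpainting_reconstruction_loss(net, image, mask):
    """Context-encoder-style masked L2 reconstruction. mask is a float
    tensor broadcastable to image, with 1 marking the missing region."""
    corrupted = image * (1 - mask)  # zero out the region to be inpainted
    pred = net(corrupted)
    return torch.mean(mask * (pred - image) ** 2)  # penalize only the hole
```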
High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs
TL;DR: A new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs) significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing.