Omni-GAN: On the Secrets of cGANs and Beyond

@inproceedings{Zhou2021OmniGANOT,
  title={Omni-GAN: On the Secrets of cGANs and Beyond},
  author={Peng Zhou and Lingxi Xie and Bingbing Ni and Qi Tian},
  booktitle={2021 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021},
  pages={14041--14051}
}
  • P. Zhou, Lingxi Xie, Bingbing Ni, Qi Tian
  • Published 26 November 2020
  • Computer Science
  • 2021 IEEE/CVF International Conference on Computer Vision (ICCV)
The conditional generative adversarial network (cGAN) is a powerful tool for generating high-quality images, but existing approaches mostly suffer from unsatisfying performance or the risk of mode collapse. This paper presents Omni-GAN, a variant of cGAN that reveals the devil in designing a proper discriminator for training the model. The key is to ensure that the discriminator receives strong supervision to perceive the concepts and moderate regularization to avoid collapse. Omni-GAN is easily…
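
As a concrete illustration of the kind of strong supervision the abstract refers to, below is a minimal PyTorch sketch of a multi-label one-vs-rest discriminator loss in the style Omni-GAN builds on. The logit layout (one logit per label, with a mask marking which labels are positive for a sample, e.g. its class plus a "real" tag) and all names are illustrative assumptions, not the paper's exact formulation.

  import torch
  import torch.nn.functional as F

  def omni_loss(logits, positive_mask):
      # logits: (B, K) discriminator outputs; positive_mask: (B, K) bool,
      # True for labels to push positive, False for labels to push
      # negative. Label layout is an illustrative assumption.
      neg_inf = torch.full_like(logits, float('-inf'))
      pos = torch.where(positive_mask, -logits, neg_inf)
      neg = torch.where(positive_mask, neg_inf, logits)
      # softplus(logsumexp(s)) == log(1 + sum(exp(s))), computed stably
      loss = F.softplus(torch.logsumexp(pos, dim=1)) \
           + F.softplus(torch.logsumexp(neg, dim=1))
      return loss.mean()
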
Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training
TLDR
This paper identifies that gradient exploding in the classifier can cause an undesirable collapse early in training, shows that projecting input vectors onto a unit hypersphere resolves the problem, and proposes the Data-to-Data Cross-Entropy loss (D2D-CE) to exploit relational information in the class-labeled dataset.
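
The hypersphere projection mentioned here amounts to L2-normalizing features and class embeddings before computing logits, which bounds logit magnitudes and hence classifier gradients. A minimal sketch, with the temperature value and all names being illustrative assumptions (the D2D-CE loss built on top additionally contrasts sample-to-sample similarities, which is not shown):

  import torch.nn.functional as F

  def cosine_logits(features, class_embeddings, temperature=0.25):
      # L2-normalize both operands so every logit is a bounded cosine
      # similarity; this caps gradient norms in the classifier head.
      f = F.normalize(features, dim=1)
      w = F.normalize(class_embeddings, dim=1)
      return f @ w.t() / temperature
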
Tail-GAN: Nonparametric Scenario Generation for Tail Risk Estimation
TLDR
This work designs a Generative Adversarial Network architecture capable of learning to simulate price scenarios that preserve tail-risk features for a set of benchmark trading strategies, leading to consistent estimators of their Value-at-Risk and Expected Shortfall.
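
For reference, the two tail-risk measures the generator is trained to preserve can be estimated nonparametrically from simulated profit-and-loss scenarios as below; a standard textbook sketch with the usual sign convention (losses reported as positive numbers), not Tail-GAN's own code:

  import numpy as np

  def var_es(pnl, alpha=0.05):
      # pnl: array of simulated profit-and-loss outcomes for a strategy.
      # Value-at-Risk is the alpha-quantile loss; Expected Shortfall is
      # the mean loss over scenarios at or beyond that quantile.
      q = np.quantile(pnl, alpha)
      return -q, -pnl[pnl <= q].mean()
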
Conditional GANs with Auxiliary Discriminative Classifier
TLDR
A novel conditional GAN with an auxiliary discriminative classifier (ADC-GAN) that can faithfully replicate the target distribution even without the original discriminator, is robust to the value of the coefficient hyper-parameter and the choice of GAN loss, and remains stable during training.
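
The discriminative classifier can be read as an ordinary softmax head over 2C labels, a "real" and a "fake" copy of every class, so that the classifier stays aware of the generator; a minimal sketch under that reading, with all names our own and the generator's flipped objective omitted:

  import torch.nn.functional as F

  def adc_classifier_loss(class_logits, labels, is_real, num_classes):
      # class_logits: (B, 2*num_classes). Real samples of class c get
      # target c; fake samples of class c get target c + num_classes.
      target = labels + (1 - is_real.long()) * num_classes
      return F.cross_entropy(class_logits, target)
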
DGL-GAN: Discriminator Guided Learning for GAN Compression
TLDR
A novel yet simple Discriminator Guided Learning approach for compressing vanilla GANs, dubbed DGL-GAN, motivated by the empirical observation that learning from the teacher discriminator facilitates the performance of student GANs; it achieves state-of-the-art results.
Manifold Learning Benefits GANs
TLDR
This paper improves Generative Adversarial Networks by incorporating a manifold learning step into the discriminator, and concludes that locality-constrained non-linear manifolds have the upper hand over linear manifolds due to their non-uniform density and smoothness.
Multiclass Image Classification Using GANs and CNN Based on Holes Drilled in Laminated Chipboard
TLDR
The aim of the research was to create a model capable of identifying different quality levels of the holes, where reduced quality serves as a warning that the drill is about to wear down, helping to reduce the damage caused by a blunt tool.
cGANs with Auxiliary Discriminative Classifier
TLDR
It is pointed out that the fundamental reason is that the classifier of AC-GAN is generator-agnostic, and thus cannot provide informative guidance to the generator to approximate the target joint distribution, leading to a minimization of conditional entropy that decreases the intra-class diversity.

References

Showing 1-10 of 91 references
Improved Techniques for Training GANs
TLDR
This work focuses on two applications of GANs: semi-supervised learning and the generation of images that humans find visually realistic. It presents ImageNet samples with unprecedented resolution and shows that the proposed methods enable the model to learn recognizable features of ImageNet classes.
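
Among the techniques this paper proposes is feature matching, which replaces the generator's usual adversarial objective with matching the first moments of an intermediate discriminator layer; a minimal sketch (which layer to match is a design choice left open here):

  def feature_matching_loss(f_real, f_fake):
      # f_real, f_fake: (B, D) activations of one discriminator layer
      # on real and generated batches. The generator minimizes the
      # squared distance between the two batch means instead of
      # maximizing the discriminator's output.
      return (f_real.mean(dim=0) - f_fake.mean(dim=0)).pow(2).sum()
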
cGANs with Projection Discriminator
TLDR
With this modification, the quality of class-conditional image generation on the ILSVRC2012 (ImageNet) 1000-class dataset is significantly improved, and the approach was successfully extended to super-resolution, producing highly discriminative super-resolution images.
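
The modification in question feeds the class label into the discriminator through an inner product with the image features, D(x, y) = psi(phi(x)) + <embed(y), phi(x)>, rather than through concatenation or an auxiliary classifier; a minimal PyTorch sketch of that head (module and variable names are our own):

  import torch.nn as nn

  class ProjectionHead(nn.Module):
      def __init__(self, feat_dim, num_classes):
          super().__init__()
          self.psi = nn.Linear(feat_dim, 1)              # unconditional term
          self.embed = nn.Embedding(num_classes, feat_dim)

      def forward(self, phi_x, y):
          # phi_x: (B, feat_dim) image features, y: (B,) class indices
          return self.psi(phi_x).squeeze(1) + (self.embed(y) * phi_x).sum(dim=1)
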
Progressive Growing of GANs for Improved Quality, Stability, and Variation
TLDR
A new training methodology for generative adversarial networks is described, starting from a low resolution, and adding new layers that model increasingly fine details as training progresses, allowing for images of unprecedented quality.
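
The key mechanic is the fade-in: while a newly added higher-resolution block trains, its output is linearly blended with an upsampled copy of the previous resolution so the new layer is introduced smoothly; a minimal sketch (the upsampling mode and all names are assumptions):

  import torch.nn.functional as F

  def faded_output(rgb_lowres, rgb_new_layer, alpha):
      # alpha ramps from 0 to 1 over training; at 0 the network still
      # behaves like the old, lower-resolution model.
      up = F.interpolate(rgb_lowres, scale_factor=2, mode='nearest')
      return (1.0 - alpha) * up + alpha * rgb_new_layer
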
A Style-Based Generator Architecture for Generative Adversarial Networks
TLDR
An alternative generator architecture for generative adversarial networks is proposed, borrowing from style transfer literature, that improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation.
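
One ingredient borrowed from the style-transfer literature is adaptive instance normalization, through which the latent "style" code modulates per-channel feature statistics at every resolution; a minimal sketch (the tensor shapes are our assumption):

  def adain(x, style_scale, style_bias, eps=1e-5):
      # x: (B, C, H, W); style_scale, style_bias: (B, C, 1, 1), predicted
      # from the latent code. Per-channel statistics of the feature map
      # are replaced by the style-derived scale and bias.
      mean = x.mean(dim=(2, 3), keepdim=True)
      std = x.std(dim=(2, 3), keepdim=True) + eps
      return style_scale * (x - mean) / std + style_bias
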
Adversarial Discriminative Domain Adaptation
TLDR
It is shown that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and the promise of the approach is demonstrated by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task.
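
ADDA's core is a GAN-style objective on features: a domain discriminator learns to tell source features from target features, while the target encoder is trained with flipped labels to fool it; a minimal sketch under that reading (all names are our own):

  import torch
  import torch.nn.functional as F

  def adda_losses(domain_disc, src_feat, tgt_feat):
      bce = F.binary_cross_entropy_with_logits
      # Discriminator step: source -> 1, target -> 0 (features detached).
      d_src = domain_disc(src_feat.detach())
      d_tgt = domain_disc(tgt_feat.detach())
      disc_loss = bce(d_src, torch.ones_like(d_src)) \
                + bce(d_tgt, torch.zeros_like(d_tgt))
      # Encoder step: flipped label, so target features look "source".
      d_enc = domain_disc(tgt_feat)
      enc_loss = bce(d_enc, torch.ones_like(d_enc))
      return disc_loss, enc_loss
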
Large Scale GAN Training for High Fidelity Natural Image Synthesis
TLDR
It is found that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the generator's input.
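
The truncation trick itself is simple: latent entries whose magnitude exceeds a threshold are resampled, and shrinking the threshold trades variety for fidelity; a minimal sketch (the default threshold value is illustrative):

  import torch

  def truncated_noise(batch, dim, threshold=0.5):
      z = torch.randn(batch, dim)
      mask = z.abs() > threshold
      while mask.any():
          # Resample only the out-of-range entries until all pass.
          z[mask] = torch.randn(int(mask.sum()))
          mask = z.abs() > threshold
      return z
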
Spectral Normalization for Generative Adversarial Networks
TLDR
This paper proposes a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator and confirms that spectrally normalized GANs (SN-GANs) are capable of generating images of better or equal quality relative to previous training stabilization techniques.
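
PyTorch ships this technique as torch.nn.utils.spectral_norm; the essence is a step of power iteration to estimate the weight matrix's largest singular value, then dividing the weight by it so the layer stays roughly 1-Lipschitz. A minimal sketch:

  import torch
  import torch.nn.functional as F

  def spectral_normalize(W, u, n_iters=1):
      # W: (out, in) weight matrix; u: (out,) power-iteration vector,
      # carried across training steps so one iteration suffices.
      for _ in range(n_iters):
          v = F.normalize(W.t() @ u, dim=0)
          u = F.normalize(W @ v, dim=0)
      sigma = u @ W @ v            # estimated largest singular value
      return W / sigma, u
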
Adversarial-Learned Loss for Domain Adaptation
TLDR
A confusion matrix, learned in an adversarial manner within ALDA, is introduced to reduce the gap and align the feature distributions; the method outperforms state-of-the-art approaches on four standard domain adaptation datasets.
Feature Quantization Improves GAN Training
TLDR
Extensive experimental results show that the proposed FQ-GAN can improve the FID scores of baseline methods by a large margin on a variety of tasks, achieving new state-of-the-art performance.
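
Feature quantization snaps discriminator features onto a learned dictionary of prototype vectors, vector-quantization style; a minimal sketch with a straight-through gradient (FQ-GAN's moving-average dictionary update is omitted, and all names are our own):

  import torch

  def quantize_features(f, codebook):
      # f: (B, D) features; codebook: (K, D) dictionary entries.
      idx = torch.cdist(f, codebook).argmin(dim=1)   # nearest entry
      q = codebook[idx]
      # Straight-through estimator: forward pass uses q, but gradients
      # flow back to the continuous features f.
      return f + (q - f).detach()
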
Invertible Conditional GANs for image editing
TLDR
This work evaluates encoders that invert the mapping of a cGAN, i.e., map a real image into a latent space and a conditional representation, which allows one to reconstruct and modify real images of faces conditioned on arbitrary attributes.
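
One common way to train such encoders is on generated pairs: sample (z, y), generate an image, then regress the encoders' outputs back to (z, y), since generated images come with their latent codes for free; a minimal sketch under that assumption (all names are our own):

  import torch.nn.functional as F

  def encoder_losses(enc_z, enc_y, generator, z, y):
      # Generated images carry known (z, y), giving free supervision
      # for the latent encoder and the condition encoder.
      x = generator(z, y).detach()
      return F.mse_loss(enc_z(x), z) + F.mse_loss(enc_y(x), y)
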