Corpus ID: 3568073

Progressive Growing of GANs for Improved Quality, Stability, and Variation

@article{Karras2017ProgressiveGO,
  title={Progressive Growing of GANs for Improved Quality, Stability, and Variation},
  author={Tero Karras and Timo Aila and Samuli Laine and Jaakko Lehtinen},
  journal={ArXiv},
  year={2017},
  volume={abs/1710.10196}
}
We describe a new training methodology for generative adversarial networks. […] We also propose a simple way to increase the variation in generated images, and achieve a record inception score of 8.80 in unsupervised CIFAR10. Additionally, we describe several implementation details that are important for discouraging unhealthy competition between the generator and discriminator. Finally, we suggest a new metric for evaluating GAN results, both in terms of image quality and variation. As an…
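The fade-in mechanic at the heart of progressive growing can be illustrated with a toy sketch (the function name and the 1-D "image" are illustrative, not from the paper): when a new higher-resolution block is added, its output is linearly blended with an upsampled copy of the previous stage's output, using a weight alpha that ramps from 0 to 1 over the course of the transition.

```python
def fade_in(old_output, new_output, alpha):
    """Blend the previous stage's (upsampled) output with the newly
    added higher-resolution block's output. alpha ramps 0 -> 1
    during the resolution transition."""
    return [(1.0 - alpha) * o + alpha * n
            for o, n in zip(old_output, new_output)]

# At alpha = 0 the network behaves exactly like the smaller model;
# at alpha = 1 the new block has fully taken over.
pixels_old = [0.2, 0.4]   # upsampled low-res output (toy 1-D "image")
pixels_new = [0.6, 0.8]   # output of the newly grown block
halfway = fade_in(pixels_old, pixels_new, 0.5)
```

The same blending is applied symmetrically on the discriminator side, so both networks grow in lockstep.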

Progressive Augmentation of GANs

The proposed progressive augmentation of GANs (PA-GAN) preserves the original GAN objective, does not compromise the discriminator's optimality and encourages a healthy competition between the generator and discriminator, leading to the better-performing generator.

Dual Contrastive Loss and Attention for GANs

A novel dual contrastive loss is proposed, and it is shown that, with this loss, the discriminator learns more generalized and distinguishable representations, incentivizing the generator to further push the boundaries of image generation.
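One direction of the contrastive objective can be sketched with a simplified InfoNCE-style term (this is an illustration of the general idea, not the paper's exact dual formulation): the discriminator is trained to rank one real sample's logit above the logits of a batch of generated samples.

```python
import math

def contrastive_d_loss(real_logit, fake_logits):
    """Cross-entropy that asks the discriminator to rank one real
    sample above a batch of generated ones (InfoNCE-style sketch;
    the 'dual' loss pairs this with the symmetric fake-vs-real term)."""
    denom = math.exp(real_logit) + sum(math.exp(f) for f in fake_logits)
    return -math.log(math.exp(real_logit) / denom)
```

The loss shrinks as the real logit grows relative to the fake logits, so minimizing it forces the discriminator's representations to separate real from generated samples as a group rather than one pair at a time.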

MSG-GAN: Multi-Scale Gradients for Generative Adversarial Networks

  • Animesh Karnewar, O. Wang
  • Computer Science
    2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2020
The Multi-Scale Gradient Generative Adversarial Network (MSG-GAN), a simple but effective technique for addressing instability in GANs by allowing the flow of gradients from the discriminator to the generator at multiple scales, is proposed.

Projected GANs Converge Faster

This work projects generated and real samples into a fixed, pretrained feature space and proposes a more effective strategy that mixes features across channels and resolutions, which improves image quality, sample efficiency, and convergence speed.

DeshuffleGAN: A Self-Supervised GAN to Improve Structure Learning

This work proposes the DeshuffleGAN to enhance the learning of both the discriminator and the generator via a self-supervision approach; it introduces a deshuffling task that solves a puzzle of randomly shuffled image tiles, which helps the DeshuffleGAN increase its expressive capacity for spatial structure and realistic appearance.
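The deshuffling pretext task can be sketched as follows (the tile count and the permutation table are illustrative): an image is split into tiles, the tiles are reordered by one of a fixed set of permutations, and the self-supervised target is the index of the permutation that was applied, not a pixel-level reconstruction.

```python
# A few fixed tile permutations; the pretext task is to predict which
# one was applied (a classification target), not to invert it directly.
PERMUTATIONS = [(0, 1, 2, 3), (1, 0, 3, 2), (3, 2, 1, 0), (2, 3, 0, 1)]

def shuffle_tiles(tiles, perm_id):
    """Reorder the 4 tiles of an image according to a known permutation.
    Returns the shuffled tiles and the self-supervised label."""
    perm = PERMUTATIONS[perm_id]
    return [tiles[i] for i in perm], perm_id
```

Solving this classification task forces the discriminator (and, through its gradients, the generator) to attend to global spatial structure rather than local texture alone.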

Large Scale GAN Training for High Fidelity Natural Image Synthesis

It is found that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the generator's input.
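The truncation trick itself is straightforward to sketch (this rejection-sampling form is one way to realize it; the point is simply that latent draws far from the mean are discarded, trading variety for fidelity):

```python
import random

def truncated_z(dim, threshold, seed=0):
    """Draw a latent vector from N(0, I), resampling any coordinate
    whose magnitude exceeds `threshold`. Smaller thresholds shrink
    the effective input variance: higher fidelity, less variety."""
    rng = random.Random(seed)
    z = []
    for _ in range(dim):
        x = rng.gauss(0.0, 1.0)
        while abs(x) > threshold:
            x = rng.gauss(0.0, 1.0)
        z.append(x)
    return z
```

Because the generator is only ever evaluated on latents inside the truncation region, regularization that keeps it well-behaved off the training distribution (such as the orthogonal regularization above) is what makes the trick safe to apply.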

Stability and Diversity in Generative Adversarial Networks

  • Yahya Dogan, H. Keles
  • Computer Science
    2019 27th Signal Processing and Communications Applications Conference (SIU)
  • 2019
This study empirically examined the state-of-the-art cost functions, regularization techniques and network architectures that have recently been proposed to deal with the problems of stability and diversity in GANs, using CelebA dataset.

MSG-GAN: Multi-Scale Gradient GAN for Stable Image Synthesis

This work proposes the Multi-Scale Gradient Generative Adversarial Network (MSG-GAN), a simple but effective technique for addressing this problem which allows the flow of gradients from the discriminator to the generator at multiple scales.
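The multi-scale idea presupposes images available at several resolutions. A toy sketch of the pyramid of inputs an MSG-style discriminator would receive (2x average pooling on plain Python lists for clarity; real implementations use the generator's intermediate feature maps):

```python
def downsample_2x(img):
    """Average-pool a 2-D image (list of lists) by a factor of 2."""
    h, w = len(img), len(img[0])
    return [[(img[r][c] + img[r][c + 1]
              + img[r + 1][c] + img[r + 1][c + 1]) / 4.0
             for c in range(0, w, 2)]
            for r in range(0, h, 2)]

def image_pyramid(img, levels):
    """Resolutions at which an MSG-style discriminator receives inputs
    (and through which gradients flow back to the generator)."""
    pyramid = [img]
    for _ in range(levels - 1):
        img = downsample_2x(img)
        pyramid.append(img)
    return pyramid
```

Because the discriminator sees every scale, the generator receives a gradient signal at every scale simultaneously, which is what replaces the explicit growing schedule of progressive training.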

Dynamically Grown Generative Adversarial Networks

A method to dynamically grow a GAN during training, optimizing the network architecture and its parameters together with automation and providing constructive insights into the GAN model design such as generator-discriminator balance and convolutional layer choices is proposed.
...

References

Showing 1-10 of 59 references

Improved Techniques for Training GANs

This work focuses on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic, and presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.

Conditional Image Synthesis with Auxiliary Classifier GANs

A variant of GANs employing label conditioning that results in 128 x 128 resolution image samples exhibiting global coherence is constructed and it is demonstrated that high resolution samples provide class information not present in low resolution samples.

High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs

A new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs) is presented, which significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing.

Improved Training of Wasserstein GANs

This work proposes an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input, which performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
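The gradient penalty can be sketched with a toy linear critic, for which the input gradient is known in closed form (real implementations obtain it by automatic differentiation at a random interpolate between real and fake samples; the function below is illustrative):

```python
import math

def gradient_penalty(w, x_real, x_fake, eps=0.5, lam=10.0):
    """WGAN-GP-style penalty for a toy linear critic D(x) = w . x.
    For a linear critic the input gradient is simply w, regardless of
    where it is evaluated; frameworks compute it by autodiff at the
    random interpolate x_hat instead."""
    x_hat = [eps * a + (1 - eps) * b
             for a, b in zip(x_real, x_fake)]  # where autodiff would evaluate
    grad_norm = math.sqrt(sum(wi * wi for wi in w))  # ||grad_x D(x_hat)||
    return lam * (grad_norm - 1.0) ** 2
```

The penalty is zero exactly when the critic's gradient has unit norm, softly enforcing the 1-Lipschitz constraint that weight clipping imposed crudely.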

Improving Generative Adversarial Networks with Denoising Feature Matching

We propose an augmented training procedure for generative adversarial networks designed to address shortcomings of the original by directing the generator towards probable configurations of abstract…

BEGAN: Boundary Equilibrium Generative Adversarial Networks

This work proposes a new equilibrium enforcing method paired with a loss derived from the Wasserstein distance for training auto-encoder based Generative Adversarial Networks, which provides a new approximate convergence measure, fast and stable training and high visual quality.

Megapixel Size Image Creation using Generative Adversarial Networks

This work presents an optimized image generation process based on Deep Convolutional Generative Adversarial Networks (DCGANs) in order to create photorealistic high-resolution images (up to 1024x1024 pixels).

Energy-based Generative Adversarial Network

We introduce the "Energy-based Generative Adversarial Network" model (EBGAN) which views the discriminator as an energy function that attributes low energies to the regions near the data manifold and…

Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network

SRGAN, a generative adversarial network (GAN) for image super-resolution (SR), is presented; to the authors' knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors, and it uses a perceptual loss function that consists of an adversarial loss and a content loss.

Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks

A generative parametric model is presented that produces high-quality samples of natural images using a cascade of convolutional networks within a Laplacian pyramid framework, generating images in a coarse-to-fine fashion.
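The Laplacian pyramid decomposition underlying this coarse-to-fine scheme can be sketched in plain Python (2x average-pool downsampling and nearest-neighbour upsampling are illustrative choices; LAPGAN uses standard image resampling):

```python
def downsample(img):
    """2x average pooling on a 2-D image stored as a list of lists."""
    return [[(img[r][c] + img[r][c + 1]
              + img[r + 1][c] + img[r + 1][c + 1]) / 4.0
             for c in range(0, len(img[0]), 2)]
            for r in range(0, len(img), 2)]

def upsample(img):
    """Nearest-neighbour 2x upsampling."""
    out = []
    for row in img:
        wide = [v for v in row for _ in range(2)]
        out.append(wide)
        out.append(list(wide))
    return out

def laplacian_residual(img):
    """High-frequency detail at this pyramid level: the part a
    conditional generator at this scale learns to synthesize."""
    up = upsample(downsample(img))
    return [[img[r][c] - up[r][c] for c in range(len(img[0]))]
            for r in range(len(img))]
```

Adding the residual back onto the upsampled coarse image reconstructs the original exactly, which is why a cascade of per-level generators can assemble a full-resolution sample.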
...