A Style-Based Generator Architecture for Generative Adversarial Networks

@article{Karras2019ASG,
  title={A Style-Based Generator Architecture for Generative Adversarial Networks},
  author={Tero Karras and Samuli Laine and Timo Aila},
  journal={2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2019},
  pages={4396-4405}
}
  • Tero Karras, Samuli Laine, Timo Aila
  • Published 12 December 2018
  • Computer Science, Mathematics
  • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. [...] Key Method: To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.
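For a concrete picture of the architecture the abstract sketches, below is a minimal, illustrative PyTorch rendering of its two signature pieces: the mapping network that transforms the latent code z into an intermediate latent w, and adaptive instance normalization (AdaIN), which injects w as per-channel styles into the synthesis network. This is a sketch under assumptions, not the authors' implementation; module names and dimensions are illustrative. The papers listed after it cite this work.

```python
# Illustrative sketch of the style-based generator's two key pieces
# (not the authors' code; names and sizes are assumptions).
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    """Maps latent z to intermediate latent w (the paper uses 8 FC layers)."""
    def __init__(self, z_dim=512, w_dim=512, n_layers=8):
        super().__init__()
        layers = []
        for i in range(n_layers):
            layers += [nn.Linear(z_dim if i == 0 else w_dim, w_dim),
                       nn.LeakyReLU(0.2)]
        self.net = nn.Sequential(*layers)

    def forward(self, z):
        return self.net(z)

class AdaIN(nn.Module):
    """Adaptive instance normalization: w supplies per-channel scale and bias."""
    def __init__(self, w_dim, channels):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels)      # affine=False by default
        self.style = nn.Linear(w_dim, 2 * channels)  # learned affine "A" in the paper

    def forward(self, x, w):
        scale, bias = self.style(w).chunk(2, dim=1)
        scale = scale[:, :, None, None]              # broadcast over H, W
        bias = bias[:, :, None, None]
        return (1 + scale) * self.norm(x) + bias
```

Because every block reads its style from w rather than from the incoming feature maps, swapping w between two sources at different depths yields the coarse-to-fine style mixing the paper studies.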
A StyleGAN-2 inspired Generative Adversarial Network for the PCA-controllable generation of drums samples for content-based retrieval
Generative Adversarial Networks (GAN) have proven incredibly effective at the task of generating highly realistic natural images. On top of this, approaches for the conditioning of the generation […]
An Image-based Generator Architecture for Synthetic Image Refinement
TLDR
These alternative generator architectures for Boundary Equilibrium Generative Adversarial Networks, motivated by Learning from Simulated and Unsupervised Images through Adversarial Training, dispense with the need for a noise-based latent space and attempt to resolve that space's poorly understood properties.
Unsupervised K-modal styled content generation
The emergence of deep generative models has recently enabled the automatic generation of massive amounts of graphical content, both in 2D and in 3D. Generative Adversarial Networks (GANs) and style […]
Unsupervised multi-modal Styled Content Generation
TLDR
This paper introduces UMMGAN, a novel architecture designed to better model multi-modal distributions, in an unsupervised fashion, and demonstrates that this approach is capable of effectively approximating a complex distribution as a superposition of multiple simple ones.
Mask-Guided Discovery of Semantic Manifolds in Generative Models
TLDR
This work presents a method to explore the manifolds of changes of spatially localized regions of the face and discovers smoothly varying sequences of latent vectors along these manifolds suitable for creating animations.
Cluster-guided Image Synthesis with Unconditional Models
TLDR
This work focuses on controllable image generation by leveraging GANs that are well-trained in an unsupervised fashion and discovers that the representation space of intermediate layers of the generator forms a number of clusters that separate the data according to semantically meaningful attributes.
Unsupervised Controllable Generation with Self-Training
TLDR
This work proposes an unsupervised framework to learn a distribution of latent codes that control the generator through self-training, and exhibits better disentanglement compared to other variants such as the variational autoencoder, and is able to discover semantically meaningful latent codes without any supervision.
Autoencoding Generative Adversarial Networks
TLDR
The Autoencoding Generative Adversarial Network (AEGAN), a four-network model which learns a bijective mapping between a specified latent space and a given sample space by applying an adversarial loss and a reconstruction loss to both the generated images and the generated latent vectors is proposed.
OpenGAN: Open Set Generative Adversarial Networks
TLDR
This work proposes an open set GAN architecture that is conditioned per-input sample with a feature embedding drawn from a metric space, and shows that classifier performance can be significantly improved by augmenting the training data with OpenGAN samples on classes that are outside of the GAN training distribution.

References

Showing 1-10 of 68 references
Self-Attention Generative Adversarial Networks
TLDR
The proposed SAGAN achieves the state-of-the-art results, boosting the best published Inception score from 36.8 to 52.52 and reducing Frechet Inception distance from 27.62 to 18.65 on the challenging ImageNet dataset.
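The mechanism behind SAGAN's gains can be written compactly. The following is a hedged sketch of a SAGAN-style self-attention block over 2D feature maps; the //8 channel reduction follows the paper, but the class name and layout here are illustrative.

```python
# Illustrative SAGAN-style self-attention block (layout is an assumption).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfAttention2d(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, 1)
        self.key = nn.Conv2d(channels, channels // 8, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # residual weight, starts at 0

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (b, hw, c//8)
        k = self.key(x).flatten(2)                     # (b, c//8, hw)
        attn = F.softmax(q @ k, dim=-1)                # (b, hw, hw): every pixel attends to all others
        v = self.value(x).flatten(2)                   # (b, c, hw)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return x + self.gamma * out                    # network eases into attention as gamma grows
```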
Generative Multi-Adversarial Networks
Generative adversarial networks (GANs) are a framework for producing a generative model by way of a two-player minimax game. In this paper, we propose the Generative Multi-Adversarial Network […]
Gaussian Mixture Generative Adversarial Networks for Diverse Datasets, and the Unsupervised Clustering of Images
TLDR
Gaussian Mixture GAN (GM-GAN) is proposed, a variant of the basic GAN model in which the probability distribution over the latent space is a mixture of Gaussians. A further feature sets this model apart from other GAN models: the option to control the quality-diversity trade-off by altering, post-training, the likelihood distribution over the latent space.
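The latent-prior change GM-GAN makes is small enough to sketch directly. The helper below is illustrative (function name and defaults are assumptions): latents are drawn from a mixture of Gaussians, and reweighting the components post-training steers the quality-diversity trade-off the TLDR mentions.

```python
# Hedged sketch: sampling GAN latents from a mixture of Gaussians (GM-GAN idea).
import torch

def sample_mixture_latent(batch, z_dim, means, std=0.2, weights=None):
    """means: (K, z_dim) component means; weights: optional (K,) mixture probabilities."""
    K = means.shape[0]
    if weights is None:
        weights = torch.full((K,), 1.0 / K)
    comps = torch.multinomial(weights, batch, replacement=True)  # component index per sample
    return means[comps] + std * torch.randn(batch, z_dim)
```

Sharpening `weights` toward a few components trades diversity for quality; flattening it does the reverse.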
Generative adversarial interpolative autoencoding: adversarial training on latent space interpolations encourage convex latent distributions
TLDR
A neural network architecture based on the Autoencoder and the Generative Adversarial Network is presented; it promotes a convex latent distribution by training adversarially on latent-space interpolations while preserving realistic resemblance to the network inputs.
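A hedged sketch of the core training signal, assuming a 2-D latent of shape (batch, z_dim); the encoder/decoder names are placeholders. Decoded interpolations of real samples' latents are handed to the adversary, which pushes the latent distribution toward convexity.

```python
# Illustrative sketch: decode a random convex combination of two real batches'
# latents; the critic is then trained on the decoded interpolations.
import torch

def interpolated_decode(encoder, decoder, x1, x2):
    """Decode a per-sample random interpolation of the latents of two real batches."""
    z1, z2 = encoder(x1), encoder(x2)
    alpha = torch.rand(z1.size(0), 1, device=z1.device)  # mixing weight per sample
    return decoder(alpha * z1 + (1 - alpha) * z2)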
On Self Modulation for Generative Adversarial Networks
TLDR
This work proposes and studies an architectural modification, self-modulation, which improves GAN performance across different datasets, architectures, losses, regularizers, and hyperparameter settings.
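Self-modulation is easy to sketch: instead of free BatchNorm scale/shift parameters, each generator layer predicts them from the latent z via a small MLP. The module below is an illustrative rendering, not the paper's code; sizes are assumptions.

```python
# Illustrative self-modulated BatchNorm: gamma and beta come from z.
import torch
import torch.nn as nn

class SelfModulatedBN(nn.Module):
    def __init__(self, z_dim, channels, hidden=32):
        super().__init__()
        self.bn = nn.BatchNorm2d(channels, affine=False)  # no free scale/shift
        self.to_gamma = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                      nn.Linear(hidden, channels))
        self.to_beta = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, channels))

    def forward(self, x, z):
        gamma = self.to_gamma(z)[:, :, None, None]  # per-channel scale from z
        beta = self.to_beta(z)[:, :, None, None]    # per-channel shift from z
        return (1 + gamma) * self.bn(x) + beta
```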
Online Adaptative Curriculum Learning for GANs
TLDR
Experimental results show that the proposed framework, which trains the generator against an ensemble of discriminator networks, improves sample quality and diversity over existing baselines by effectively learning a curriculum; they also support the claim that weaker discriminators have higher entropy, improving mode coverage.
Progressive Growing of GANs for Improved Quality, Stability, and Variation
TLDR
A new training methodology for generative adversarial networks is described: training starts at a low resolution, and new layers that model increasingly fine details are added as training progresses, allowing for images of unprecedented quality.
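The key mechanic is the fade-in used whenever a new resolution is introduced. The sketch below is a simplification (the paper blends the RGB-converted outputs of the old and new blocks); `alpha` ramps linearly from 0 to 1 over the transition.

```python
# Illustrative fade-in for progressive growing: blend the upsampled output of
# the previous resolution with the new block's output.
import torch.nn.functional as F

def fade_in(x_prev, x_new, alpha):
    """alpha in [0, 1] ramps up while the new higher-resolution block is introduced."""
    x_prev_up = F.interpolate(x_prev, scale_factor=2, mode='nearest')
    return (1 - alpha) * x_prev_up + alpha * x_new
```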
Improved Training of Wasserstein GANs
TLDR
This work proposes an alternative to clipping weights: penalizing the norm of the gradient of the critic with respect to its input. This performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
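The penalty is standard enough to state as code. The sketch below assumes image-shaped inputs and the paper's default coefficient λ = 10; the function name is illustrative.

```python
# Illustrative WGAN-GP gradient penalty: push ||grad D(x_hat)|| toward 1 at
# random interpolates x_hat between real and generated samples.
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (eps * real + (1 - eps) * fake).requires_grad_(True)
    d_hat = critic(x_hat)
    grads = torch.autograd.grad(outputs=d_hat.sum(), inputs=x_hat,
                                create_graph=True)[0]
    grad_norm = grads.flatten(1).norm(2, dim=1)          # per-sample gradient norm
    return lambda_gp * ((grad_norm - 1) ** 2).mean()
```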
Adversarial Feature Learning
TLDR
Bidirectional Generative Adversarial Networks are proposed as a means of learning the inverse mapping of GANs, and it is demonstrated that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.
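The construction is compact: the discriminator judges joint (image, latent) pairs rather than images alone. The helper below is an illustrative sketch with placeholder names.

```python
# Illustrative BiGAN setup: the discriminator must separate (x, E(x)) from
# (G(z), z); at the optimum, the encoder E approximately inverts G.
def bigan_pairs(x_real, z_prior, encoder, generator):
    """Build the two joint pairs the BiGAN discriminator is trained on."""
    real_pair = (x_real, encoder(x_real))      # labeled "real"
    fake_pair = (generator(z_prior), z_prior)  # labeled "fake"
    return real_pair, fake_pair
```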
Dropout-GAN: Learning from a Dynamic Ensemble of Discriminators
TLDR
This work proposes to incorporate adversarial dropout in generative multi-adversarial networks by omitting, or dropping out, the feedback of each discriminator with some probability at the end of each batch, and shows that this leads to a more generalized generator.
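A hedged sketch of that batch-level dropout of discriminator feedback; the function name and keep probability are assumptions.

```python
# Illustrative Dropout-GAN-style aggregation: each discriminator's feedback to
# the generator is kept with some probability per batch.
import random
import torch

def ensemble_generator_loss(g_losses, keep_prob=0.5):
    """g_losses: list of per-discriminator generator losses (scalar tensors)."""
    kept = [loss for loss in g_losses if random.random() < keep_prob]
    if not kept:                         # ensure at least one critic survives
        kept = [random.choice(g_losses)]
    return torch.stack(kept).mean()
```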