Corpus ID: 232155655

Dynamically Grown Generative Adversarial Networks

Lanlan Liu, Yuting Zhang, Jia Deng, Stefano Soatto
Recent work introduced progressive network growing as a promising way to ease the training of large GANs, but the model design and architecture-growing strategy remain under-explored and need manual design for different image data. In this paper, we propose a method to dynamically grow a GAN during training, automatically optimizing the network architecture and its parameters together. The method embeds architecture search techniques as an interleaving step with gradient-based…
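The interleaving of gradient-based training with architecture-growing steps can be sketched roughly as follows; `ToyGAN`, the two-action candidate set, and the depth-balancing score are illustrative assumptions of ours, not the paper's actual search space or fitness measure:

```python
import copy

class ToyGAN:
    """Stand-in for a GAN whose generator/discriminator depths can grow."""
    def __init__(self):
        self.g_layers = 1  # generator depth
        self.d_layers = 1  # discriminator depth

    def train_steps(self, n):
        # placeholder for n gradient updates of G and D
        pass

    def score(self):
        # placeholder fitness (a real system might use validation FID);
        # this toy proxy simply prefers balanced G/D depths
        return -abs(self.g_layers - self.d_layers)

GROW_ACTIONS = {
    "grow_g": lambda m: setattr(m, "g_layers", m.g_layers + 1),
    "grow_d": lambda m: setattr(m, "d_layers", m.d_layers + 1),
}

def grow_step(model):
    # Try each candidate growth action on a copy, briefly fine-tune it,
    # then apply the best-scoring action to the real model.
    best_name, best_score = None, float("-inf")
    for name, action in GROW_ACTIONS.items():
        trial = copy.deepcopy(model)
        action(trial)
        trial.train_steps(10)
        if trial.score() > best_score:
            best_name, best_score = name, trial.score()
    GROW_ACTIONS[best_name](model)
    return best_name

gan = ToyGAN()
for phase in range(4):
    gan.train_steps(100)  # gradient-based training
    grow_step(gan)        # interleaved architecture-growing step
print(gan.g_layers, gan.d_layers)  # depths after four growth phases
```

A real system would score candidates with a generative metric such as FID and would grow filter counts and resolutions, not just depth.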

AGAN: Towards Automated Design of Generative Adversarial Networks
This paper presents AGAN (automated neural architecture search for deep generative models), the first neural architecture search algorithm specifically suited to GAN training, and empirically demonstrates that the modules learned by AGAN transfer to other image-generation tasks such as STL-10.
Progressive Growing of GANs for Improved Quality, Stability, and Variation
A new training methodology for generative adversarial networks is described: starting from a low resolution and adding new layers that model increasingly fine details as training progresses, allowing for images of unprecedented quality.
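The layer-adding idea can be illustrated with the fade-in blending used when a new resolution is introduced: the new layer's output is mixed in with a weight alpha that ramps from 0 to 1. The helper names below are ours, and images are plain nested lists for brevity:

```python
def upsample(img):
    # nearest-neighbour 2x upsample of a square image (list of rows)
    out = []
    for row in img:
        wide = [v for v in row for _ in (0, 1)]
        out.append(wide)
        out.append(list(wide))
    return out

def fade_in(low_res, new_layer_out, alpha):
    """Blend the upsampled old pathway with the freshly added layer's output."""
    up = upsample(low_res)
    return [
        [(1 - alpha) * a + alpha * b for a, b in zip(r1, r2)]
        for r1, r2 in zip(up, new_layer_out)
    ]

low = [[1.0]]                   # 1x1 "image" from the existing layers
new = [[0.0, 0.0], [0.0, 0.0]]  # 2x2 output of the newly added layer
# early in the fade (alpha = 0.25): output is mostly the old pathway
print(fade_in(low, new, 0.25))  # [[0.75, 0.75], [0.75, 0.75]]
```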
AutoGAN: Neural Architecture Search for Generative Adversarial Networks
This paper presents the first preliminary study on introducing neural architecture search (NAS) to generative adversarial networks (GANs), dubbed AutoGAN, and discovers architectures that achieve highly competitive performance compared to current state-of-the-art hand-crafted GANs.
Consistency Regularization for Generative Adversarial Networks
This work proposes a simple, effective training stabilizer based on the notion of consistency regularization, which improves state-of-the-art FID scores for conditional generation and achieves the best FID scores for unconditional image generation compared to other regularization methods on CIFAR-10 and CelebA.
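A toy version of the consistency penalty, penalizing the discriminator for changing its output under a semantics-preserving augmentation (here a horizontal flip); the stand-in discriminator and all names are our illustrative assumptions:

```python
def discriminator(img):
    # stand-in critic on a 1-D "image": position-weighted pixel sum,
    # deliberately not flip-invariant so the penalty is non-trivial
    return sum(i * v for i, v in enumerate(img))

def hflip(img):
    # semantics-preserving augmentation: horizontal flip
    return img[::-1]

def consistency_loss(img, lam=10.0):
    """Penalize disagreement between D(img) and D(augment(img))."""
    d_real = discriminator(img)
    d_aug = discriminator(hflip(img))
    return lam * (d_real - d_aug) ** 2

print(consistency_loss([1.0, 2.0]))  # D outputs 2.0 vs 1.0 -> loss 10.0
```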
Self-Attention Generative Adversarial Networks
The proposed SAGAN achieves state-of-the-art results, boosting the best published Inception score from 36.8 to 52.52 and reducing the Fréchet Inception distance from 27.62 to 18.65 on the challenging ImageNet dataset.
Improved Training of Wasserstein GANs
This work proposes an alternative to clipping weights: penalizing the norm of the gradient of the critic with respect to its input, which performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
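For a linear critic f(x) = w·x the input gradient is w itself, so the penalty has a closed form; this sketch (with our own names) illustrates the two-sided penalty term, not the full autograd-based implementation:

```python
import math

def gradient_penalty(w, lam=10.0):
    """Two-sided penalty lam * (||grad_x f||_2 - 1)^2 for f(x) = w . x."""
    # the input gradient of a linear critic is just its weight vector w
    grad_norm = math.sqrt(sum(wi * wi for wi in w))
    return lam * (grad_norm - 1.0) ** 2

print(gradient_penalty([1.0, 0.0]))  # gradient norm 1 -> penalty 0.0
print(gradient_penalty([3.0, 4.0]))  # gradient norm 5 -> penalty 160.0
```

In the actual method the gradient is taken by automatic differentiation, evaluated at random interpolations between real and generated samples.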
Improved Techniques for Training GANs
This work focuses on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. It presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.
BEGAN: Boundary Equilibrium Generative Adversarial Networks
This work proposes a new equilibrium-enforcing method paired with a loss derived from the Wasserstein distance for training autoencoder-based Generative Adversarial Networks, providing a new approximate convergence measure, fast and stable training, and high visual quality.
Spectral Normalization for Generative Adversarial Networks
This paper proposes a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator, and confirms that spectrally normalized GANs (SN-GANs) are capable of generating images of better or equal quality relative to previous training-stabilization techniques.
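A minimal power-iteration sketch of the idea (our names, plain-Python matrices): estimate the largest singular value of a weight matrix and divide it out, so the resulting linear map is approximately 1-Lipschitz:

```python
import math

def matvec(W, v):
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def transpose(W):
    return [list(col) for col in zip(*W)]

def l2(v):
    return math.sqrt(sum(x * x for x in v))

def spectral_norm(W, iters=50):
    """Estimate the largest singular value of W by power iteration."""
    v = [1.0] * len(W[0])
    for _ in range(iters):
        u = matvec(W, v)
        nu = l2(u)
        u = [x / nu for x in u]
        v = matvec(transpose(W), u)
        nv = l2(v)
        v = [x / nv for x in v]
    return l2(matvec(W, v))

def spectrally_normalize(W):
    """Divide W by its spectral norm so the layer is ~1-Lipschitz."""
    sigma = spectral_norm(W)
    return [[w / sigma for w in row] for row in W]

W = [[3.0, 0.0], [0.0, 1.0]]        # singular values 3 and 1
Wn = spectrally_normalize(W)
print(round(spectral_norm(Wn), 6))  # ~1.0 after normalization
```

The published technique maintains the power-iteration vectors across training steps, so each update needs only one iteration per layer.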
Dist-GAN: An Improved GAN Using Distance Constraints
This system constrains the generator with an autoencoder (AE), treating the reconstructed samples from the AE as “real” samples for the discriminator, effectively slowing the discriminator's convergence and reducing gradient vanishing.