Evolutionary Generative Adversarial Networks

@article{Wang2019EvolutionaryGA,
  title={Evolutionary Generative Adversarial Networks},
  author={Chaoyue Wang and Chang Xu and Xin Yao and Dacheng Tao},
  journal={IEEE Transactions on Evolutionary Computation},
  year={2019},
  volume={23},
  pages={921--934}
}
Generative adversarial networks (GANs) have been effective for learning generative models of real-world data. However, as generative tasks become increasingly challenging, existing GANs (GAN and its variants) tend to suffer from training problems such as instability and mode collapse. In this paper, we propose a novel GAN framework called evolutionary GANs (E-GANs) for stable GAN training and improved generative performance. Unlike existing GANs, which employ a…
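The variation–evaluation–selection loop the abstract alludes to can be sketched with a toy stand-in. Everything below is an illustrative assumption, not the paper's implementation: a 1-D parameter replaces the generator, random perturbations replace the paper's three adversarial mutation objectives, and a distance-based score replaces the quality-plus-diversity fitness evaluated by the discriminator.

```python
import random

random.seed(0)

def fitness(theta):
    # Hypothetical stand-in for E-GAN's quality + diversity fitness.
    return -abs(theta - 3.0)

def evolve(theta, steps=50, n_mutations=3, lr=0.5):
    for _ in range(steps):
        # Mutation: produce several offspring via different update rules
        # (E-GAN uses distinct adversarial objectives for this step).
        offspring = [theta + lr * random.uniform(-1, 1)
                     for _ in range(n_mutations)]
        # Selection: only the fittest offspring survives to the next round.
        theta = max(offspring, key=fitness)
    return theta

theta = evolve(0.0)
```

The point of the sketch is the survival-of-the-fittest structure: each generation proposes several candidate updates and keeps only the best-scoring one, which is what distinguishes E-GAN from a single-objective training loop.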


Evolutionary Generative Adversarial Networks with Crossover Based Knowledge Distillation
TLDR
This paper proposes a general crossover operator that can be widely applied to GANs trained with evolutionary strategies, designs an evolutionary GAN framework named C-GAN based on it, and combines the crossover operator with evolutionary generative adversarial networks (E-GAN) to implement evolutionary generative adversarial networks with crossover (CE-GAN).
Attentive evolutionary generative adversarial network
TLDR
With the algorithm, AEGAN overcomes the shortcomings of traditional GANs caused by a single loss function and deep convolution, and it greatly improves training stability and statistical efficiency.
Neuroevolution of Generative Adversarial Networks
TLDR
Evolutionary pressure is used to guide the training of GANs to build robust models, improving the quality of results and providing more stable training; the proposals can also automatically provide useful architectural definitions, avoiding the manual discovery of suitable models for GANs.
CDE-GAN: Cooperative Dual Evolution-Based Generative Adversarial Network
TLDR
Extensive experiments demonstrate that the proposed CDE-GAN achieves competitive and superior performance over baselines in generating high-quality and diverse samples.
Improved Evolutionary Generative Adversarial Networks
TLDR
An evolutionary GAN framework named improved evolutionary generative adversarial networks (IE-GAN) is designed, and a universal crossover operator based on knowledge distillation is proposed, which can be widely applied to evolutionary GANs and complements the missing crossover variation of E-GAN.
Spatial Coevolution for Generative Adversarial Network Training
TLDR
A system that combines spatial coevolution with gradient-based learning to improve the robustness and scalability of GAN training is presented, and a GAN-training feature of Lipizzaner is shown: the ability to train simultaneously with different loss functions in the gradient-descent parameter-learning framework of each GAN at each cell.
Generative modelling and adversarial learning
TLDR
This thesis aims both to improve the quality of generative modelling and to manipulate generated samples by specifying multiple scene properties; it devises a novel model, called a perceptual adversarial network (PAN), which consists of two feed-forward convolutional neural networks: a transformation network and a discriminative network.
Stabilizing Generative Adversarial Network Training: A Survey
TLDR
This survey summarizes the approaches and methods employed to stabilize the GAN training procedure and discusses the advantages and disadvantages of each method, offering a comparative summary of the literature on the topic.
Multi-objective evolutionary GAN
TLDR
A new algorithm is proposed, called Multi-Objective Evolutionary Generative Adversarial Network (MOEGAN), which reformulates the problem of training GANs as a multi-objective optimization problem, and Pareto dominance is used to select the best solutions.
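The Pareto-dominance selection mentioned in the MOEGAN summary can be sketched independently of any GAN machinery. The two helpers below are generic illustrations, not MOEGAN's implementation; the function names and the minimisation convention (lower is better on every objective) are assumptions.

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b: a is no worse on every
    objective and strictly better on at least one (minimisation)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep exactly the solutions not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

In a MOEGAN-style setting each point would be a generator's vector of objective values (e.g. quality and diversity scores), and only the non-dominated generators survive selection.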
DMGAN: Discriminative Metric-based Generative Adversarial Networks
TLDR
A novel model, called Discriminative Metric-based Generative Adversarial Networks (DMGANs), is proposed for generating realistic samples from the perspective of deep metric learning, and a data-dependent strategy of weight adaptation is proposed to further improve the quality of generated samples.

References

Showing 1–10 of 88 references
Multi-agent Diverse Generative Adversarial Networks
TLDR
MAD-GAN, an intuitive generalization of generative adversarial networks and their conditional variants, is proposed to address the well-known problem of mode collapse, and its efficacy on the unsupervised feature-representation task is shown.
Generative Multi-Adversarial Networks
Generative adversarial networks (GANs) are a framework for producing a generative model by way of a two-player minimax game. In this paper, we propose the Generative Multi-Adversarial Network
MGAN: Training Generative Adversarial Nets with Multiple Generators
TLDR
A new approach to training Generative Adversarial Nets with a mixture of generators to overcome the mode collapse problem; theoretical analysis proves that, at equilibrium, the Jensen–Shannon divergence (JSD) between the mixture of the generators' distributions and the empirical data distribution is minimal, whilst the JSD among the generators' distributions is maximal, hence effectively avoiding mode collapse.
Dual Discriminator Generative Adversarial Nets
TLDR
A novel approach to tackle mode collapse in generative adversarial networks (GANs), which combines the Kullback–Leibler (KL) and reverse-KL divergences into a unified objective function, thus exploiting the complementary statistical properties of these divergences to effectively diversify the estimated density and capture multiple modes.
Generalization and equilibrium in generative adversarial nets (GANs) (invited talk)
Generative Adversarial Networks (GANs) have become one of the dominant methods for fitting generative models to complicated real-life data, and even found unusual uses such as designing good
Adversarial Discriminative Domain Adaptation
TLDR
It is shown that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and the promise of the approach is demonstrated by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task.
Least Squares Generative Adversarial Networks
TLDR
This paper proposes Least Squares Generative Adversarial Networks (LSGANs), which adopt the least-squares loss function for the discriminator, and shows that minimizing the LSGAN objective yields minimizing the Pearson χ² divergence.
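The least-squares losses behind LSGAN can be written down directly. The sketch below uses the common 0/1 target coding (real → 1, fake → 0, generator target 1), one of the codings the paper discusses; the function names are illustrative, and `d_real`/`d_fake` stand for the discriminator's raw outputs on real and generated batches.

```python
import numpy as np

def d_loss(d_real, d_fake):
    # Discriminator regresses real samples toward 1 and fakes toward 0.
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

def g_loss(d_fake):
    # Generator pushes the discriminator's output on fakes toward 1.
    return 0.5 * np.mean((d_fake - 1.0) ** 2)
```

Because these are smooth quadratic penalties rather than a saturating log-loss, gradients remain informative even for samples the discriminator classifies confidently, which is the stability argument made for LSGAN.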
Generating Text via Adversarial Training
TLDR
A generic framework employing long short-term memory (LSTM) and convolutional neural networks (CNNs) for adversarial training to generate realistic text, and it is demonstrated that the model can generate realistic sentences using adversarial training.
Improved Training of Wasserstein GANs
TLDR
This work proposes an alternative to clipping weights: penalizing the norm of the critic's gradient with respect to its input, which performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
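The gradient penalty described above can be illustrated without automatic differentiation by using a linear critic D(x) = w·x, whose input-gradient is exactly w. This is a deliberate simplification: real WGAN-GP code differentiates the critic at points interpolated between real and fake batches, and the helper name below is hypothetical. The coefficient 10 is the default weight reported for the penalty term.

```python
import numpy as np

def gradient_penalty_linear(w, lambda_gp=10.0):
    # For D(x) = w.x the gradient of D w.r.t. x is w everywhere,
    # so the penalty depends only on how far ||w|| is from 1.
    grad_norm = np.linalg.norm(w)
    return lambda_gp * (grad_norm - 1.0) ** 2
```

A unit-norm critic (e.g. w = [0.6, 0.8]) incurs zero penalty, while any critic whose gradient norm drifts from 1 is pulled back toward the 1-Lipschitz constraint the Wasserstein formulation requires.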
Gated-GAN: Adversarial Gated Networks for Multi-Collection Style Transfer
TLDR
This paper proposes adversarial gated networks (Gated-GAN) to transfer multiple styles in a single model and makes it possible to explore a new style by investigating styles learned from artists or genres.