@article{Wang2019EvolutionaryGA,
  author={Chaoyue Wang and Chang Xu and Xin Yao and Dacheng Tao},
  title={Evolutionary Generative Adversarial Networks},
  journal={IEEE Transactions on Evolutionary Computation},
  year={2019},
  volume={23},
  pages={921--934}
}
• Published 1 March 2018
• Computer Science
• IEEE Transactions on Evolutionary Computation
Generative adversarial networks (GANs) have been effective for learning generative models of real-world data. However, as generative tasks become increasingly challenging, existing GANs (the original GAN and its variants) tend to suffer from training problems such as instability and mode collapse. In this paper, we propose a novel GAN framework called evolutionary GANs (E-GAN) for stable GAN training and improved generative performance. Unlike existing GANs, which employ a…
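The evolutionary loop the abstract describes — mutate generators with different adversarial objectives, score each child with a quality-plus-diversity fitness, and keep the fittest survivors — can be sketched as below. This is a toy illustration only: the parameter vectors, mutation steps, and fitness terms are illustrative stand-ins, not the paper's actual networks or objectives.

```python
import random

# Stand-ins for E-GAN's three mutation objectives
# (minimax, heuristic, least-squares); here just scale factors.
MUTATIONS = [0.5, 1.0, 2.0]

def mutate(parent, step):
    """Produce a child by perturbing the parent with one 'objective'."""
    return [w + step * random.gauss(0.0, 0.1) for w in parent]

def fitness(child, gamma=0.5):
    """Stand-in for E-GAN's fitness: quality term + gamma * diversity term."""
    quality = -sum(w * w for w in child)   # pretend the optimum is at zero
    diversity = random.random()            # placeholder diversity score
    return quality + gamma * diversity

def evolve(population, generations=10, survivors=2):
    """Each generation: every parent spawns one child per mutation,
    children are ranked by fitness, and only the fittest survive."""
    for _ in range(generations):
        children = [mutate(p, m) for p in population for m in MUTATIONS]
        children.sort(key=fitness, reverse=True)
        population = children[:survivors]  # selection step
    return population

random.seed(0)
best = evolve([[1.0, -1.0], [0.5, 0.5]])
print(len(best))  # the surviving population size
```

In the actual E-GAN, the "mutations" are gradient updates of the generator under different adversarial losses against a shared discriminator, and fitness balances sample quality against diversity.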
146 Citations

## Citations

Evolutionary Generative Adversarial Networks with Crossover Based Knowledge Distillation
2021 International Joint Conference on Neural Networks (IJCNN)
• 2021
This paper proposes a general crossover operator that can be widely applied to GANs using evolutionary strategies, designs an evolutionary GAN framework named C-GAN based on it, and combines the crossover operator with evolutionary generative adversarial networks (E-GAN) to implement evolutionary generative adversarial networks with crossover (CE-GAN).
Appl. Intell.
• 2021
With this algorithm, the AEGAN overcomes the shortcomings that a single loss function and deep convolution bring to traditional GANs, and it greatly improves training stability and statistical efficiency.
Deep Neural Evolution
• 2020
Evolutionary pressure is used to guide the training of GANs toward robust models, improving the quality of results and providing more stable training; these proposals can automatically produce useful architectural definitions, avoiding the manual discovery of suitable models for GANs.
CDE-GAN: Cooperative Dual Evolution-Based Generative Adversarial Network
IEEE Transactions on Evolutionary Computation
• 2021
Extensive experiments demonstrate that the proposed CDE-GAN achieves competitive and superior performance over baselines in generating good-quality and diverse samples, improving generative performance.
• 2021
An evolutionary GAN framework named improved evolutionary generative adversarial networks (IE-GAN) is designed, and a universal crossover operator based on knowledge distillation is proposed; it can be widely applied to evolutionary GANs and complements the crossover variation missing from E-GAN.
Spatial Coevolution for Generative Adversarial Network Training
ACM Trans. Evol. Learn. Optim.
• 2021
A system that combines spatial coevolution with gradient-based learning to improve the robustness and scalability of GAN training. It also showcases a GAN-training feature of Lipizzaner: the ability to train simultaneously with different loss functions in the gradient-descent parameter-learning framework of each GAN at each cell.
This thesis aims both to improve the quality of generative modelling and to manipulate generated samples by specifying multiple scene properties, and devises a novel model, called a perceptual adversarial network (PAN), which consists of two feed-forward convolutional neural networks: a transformation network and a discriminative network.
Stabilizing Generative Adversarial Network Training: A Survey
ArXiv
• 2019
This survey summarizes the approaches and methods employed to stabilize the GAN training procedure, discusses the advantages and disadvantages of each method, and offers a comparative summary of the literature.
Multi-objective evolutionary GAN
GECCO Companion
• 2020
A new algorithm is proposed, called Multi-Objective Evolutionary Generative Adversarial Network (MOEGAN), which reformulates the problem of training GANs as a multi-objective optimization problem, and Pareto dominance is used to select the best solutions.
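The Pareto-dominance selection that MOEGAN relies on can be sketched as follows; the fitness vectors and helper names are illustrative, not taken from the paper. A solution dominates another if it is at least as good on every objective and strictly better on at least one, and the Pareto front is the set of non-dominated solutions.

```python
def dominates(a, b):
    """True if fitness vector a Pareto-dominates b (maximization)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the solutions not dominated by any other solution."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (quality, diversity) fitness vectors for four generators:
front = pareto_front([(1, 2), (2, 1), (0, 0), (2, 2)])
print(front)  # → [(2, 2)]
```

In a multi-objective evolutionary GAN, such non-dominated sorting replaces the single scalar fitness used by E-GAN, so no fixed trade-off between the objectives has to be chosen in advance.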
• 2020
A novel model, called Discriminative Metric-based Generative Adversarial Networks (DMGANs), for generating realistic samples from the perspective of deep metric learning; a data-dependent weight-adaptation strategy is proposed to further improve the quality of generated samples.

## References

Showing 1-10 of 88 references
2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
• 2018
MAD-GAN, an intuitive generalization of generative adversarial networks and their conditional variants, is proposed to address the well-known problem of mode collapse, and its efficacy on the unsupervised feature-representation task is shown.
ICLR
• 2017
Generative adversarial networks (GANs) are a framework for producing a generative model by way of a two-player minimax game. In this paper, we propose the Generative Multi-Adversarial Network…
MGAN: Training Generative Adversarial Nets with Multiple Generators
ICLR
• 2018
A new approach to training generative adversarial nets with a mixture of generators to overcome the mode-collapse problem; theoretical analysis proves that, at equilibrium, the Jensen-Shannon divergence (JSD) between the mixture of the generators' distributions and the empirical data distribution is minimal, while the JSD among the generators' distributions is maximal, hence effectively avoiding mode collapse.
NIPS
• 2017
A novel approach to tackling mode collapse in generative adversarial networks (GANs), which combines the Kullback-Leibler (KL) and reverse-KL divergences into a unified objective function, thereby exploiting the complementary statistical properties of these divergences to effectively diversify the estimated density and capture multiple modes.
Generalization and equilibrium in generative adversarial nets (GANs) (invited talk)
ICML
• 2017
Generative adversarial networks (GANs) have become one of the dominant methods for fitting generative models to complicated real-life data, and have even found unusual uses such as designing good…
2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
• 2017
It is shown that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and the promise of the approach is demonstrated by exceeding state-of-the-art unsupervised adaptation results on standard domain adaptation tasks as well as a difficult cross-modality object classification task.
2017 IEEE International Conference on Computer Vision (ICCV)
• 2017
This paper proposes Least Squares Generative Adversarial Networks (LSGANs), which adopt the least-squares loss function for the discriminator, and shows that minimizing the LSGAN objective amounts to minimizing the Pearson χ² divergence.
• 2016
A generic framework employing long short-term memory (LSTM) and convolutional neural networks (CNNs) for adversarial training to generate realistic text; it is demonstrated that the model can generate realistic sentences using adversarial training.
Improved Training of Wasserstein GANs
NIPS
• 2017
This work proposes an alternative to clipping weights: penalizing the norm of the critic's gradient with respect to its input. This performs better than the standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning.
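The gradient-penalty idea summarized above is conventionally written as the following critic objective (a standard statement of WGAN-GP, reproduced here for context rather than quoted from this page), where $\mathbb{P}_r$ and $\mathbb{P}_g$ are the real and generated distributions and $\hat{x}$ is sampled along lines between real and generated points:

```latex
L = \mathbb{E}_{\tilde{x} \sim \mathbb{P}_g}\big[D(\tilde{x})\big]
  - \mathbb{E}_{x \sim \mathbb{P}_r}\big[D(x)\big]
  + \lambda \, \mathbb{E}_{\hat{x} \sim \mathbb{P}_{\hat{x}}}
      \Big[\big(\lVert \nabla_{\hat{x}} D(\hat{x}) \rVert_2 - 1\big)^2\Big]
```

The last term penalizes the critic whenever its gradient norm deviates from 1, softly enforcing the Lipschitz constraint that weight clipping enforced crudely.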
Gated-GAN: Adversarial Gated Networks for Multi-Collection Style Transfer
• Xinyuan Chen, Chang Xu
IEEE Transactions on Image Processing
• 2019
This paper proposes adversarial gated networks (Gated-GAN) to transfer multiple styles in a single model and makes it possible to explore a new style by investigating styles learned from artists or genres.