Corpus ID: 81977247

On catastrophic forgetting in Generative Adversarial Networks

@article{ThanhTung2018OnCF,
  title={On catastrophic forgetting in Generative Adversarial Networks},
  author={Hoang Thanh-Tung and T. Tran},
  journal={arXiv preprint},
  year={2018}
}
We view the training of Generative Adversarial Networks (GANs) as a continual learning problem: the sequence of generated distributions forms a sequence of tasks for the discriminator. We show that catastrophic forgetting occurs in GANs and that it can make GAN training non-convergent, and we provide a theoretical analysis of the problem. To prevent catastrophic forgetting, we propose a way to adapt continual learning techniques to GANs. Our method is orthogonal to…
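The abstract proposes adapting continual-learning techniques to the discriminator. One standard such technique is elastic weight consolidation (EWC), which penalizes drift of parameters that were important on earlier tasks (here, earlier generator distributions). The paper does not specify its exact method in this excerpt, so the following is only a minimal NumPy sketch of an EWC-style penalty under that assumption; the function names and the diagonal Fisher estimate are illustrative, not taken from the paper.

```python
import numpy as np

def estimate_fisher_diag(per_sample_grads):
    """Diagonal Fisher-information estimate: the mean of squared
    per-sample gradients of the discriminator loss."""
    g = np.stack(per_sample_grads)        # shape: (n_samples, n_params)
    return (g ** 2).mean(axis=0)          # shape: (n_params,)

def ewc_penalty(params, anchor_params, fisher_diag, lam=1.0):
    """EWC-style quadratic penalty anchoring the current discriminator
    parameters to those learned on an earlier generator distribution.
    Parameters with high Fisher values are penalized more for drifting."""
    return 0.5 * lam * np.sum(fisher_diag * (params - anchor_params) ** 2)

# Toy usage: the penalty is zero at the anchor and grows as the
# discriminator's parameters move away from their earlier values.
anchor = np.array([1.0, -2.0, 0.5])
fisher = estimate_fisher_diag([np.array([0.1, 0.2, 0.3]),
                               np.array([0.2, 0.1, 0.0])])
zero_at_anchor = ewc_penalty(anchor, anchor, fisher)
drifted = ewc_penalty(anchor + 1.0, anchor, fisher, lam=2.0)
```

In training, this penalty would be added to the discriminator's usual loss so that updates against the current generator do not erase what was learned against earlier ones.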
2 Citations

- Comparative Analysis of Catastrophic Forgetting in Metric Learning
- Group Equivariant Generative Adversarial Networks