Interpreting the Latent Space of Generative Adversarial Networks using Supervised Learning

@inproceedings{interpreting_latent_space_gan,
  title={Interpreting the Latent Space of Generative Adversarial Networks using Supervised Learning},
  author={Toan Pham Van and Tam Minh Nguyen and Ngoc N. Tran and Hoai Viet Nguyen and Linh Doan Bao and Huy Dao Quang and Ta Minh Thanh},
  booktitle={2020 International Conference on Advanced Computing and Applications (ACOMP)},
  year={2020}
}
With the great progress in the development of Generative Adversarial Networks (GANs) in recent years, the quest for insights into understanding and manipulating the latent space of GANs has gained more and more attention due to its wide range of applications. While most research on this task has focused on unsupervised learning methods, which induce difficulties in training and limitations in results, our work takes another direction, encoding human prior knowledge to discover more…




InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets

Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods.

Introduction to PyTorch

In this chapter, the authors cover PyTorch, a more recent addition to the deep learning framework ecosystem that has fairly good Graphical Processing Unit (GPU) support and is maturing quickly.

Can We Gain More from Orthogonality Regularizations in Training Deep CNNs?

This paper develops novel orthogonality regularizations for training deep CNNs, drawing on analytical tools such as mutual coherence and the restricted isometry property to derive plug-and-play regularizers that can be conveniently incorporated into training almost any CNN without extra hassle.
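The simplest regularizer in this family penalizes the distance of a weight matrix's Gram matrix from the identity. A minimal sketch of that soft-orthogonality term (the function name and the default weight `lam` are illustrative, not from the paper):

```python
import numpy as np

def soft_orthogonality_penalty(W, lam=1e-4):
    """Soft orthogonality regularizer: lam * ||W^T W - I||_F^2.
    W is a 2-D weight matrix; conv kernels are typically reshaped to
    (in_channels * k * k) x out_channels before applying it."""
    gram = W.T @ W
    identity = np.eye(W.shape[1])
    return lam * np.sum((gram - identity) ** 2)

# A matrix with orthonormal columns incurs a (near-)zero penalty,
# while a rank-deficient matrix is penalized.
Q, _ = np.linalg.qr(np.random.randn(8, 4))
print(soft_orthogonality_penalty(Q))            # ~0
print(soft_orthogonality_penalty(np.ones((4, 4))))
```

In practice this scalar is simply added to the task loss, so gradients flow through `W` like any other term.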

Large Scale GAN Training for High Fidelity Natural Image Synthesis

It is found that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the Generator's input.
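The truncation trick itself is just resampling: latent coordinates drawn from N(0, I) whose magnitude exceeds a threshold are redrawn, shrinking the effective input variance. A minimal sketch under that reading (function name and parameters are illustrative):

```python
import numpy as np

def truncated_z(batch, dim, threshold=0.5, rng=None):
    """Sample latent vectors from N(0, I), resampling any coordinate
    whose magnitude exceeds `threshold`. Lower thresholds trade sample
    variety for fidelity, as described for BigGAN."""
    rng = np.random.default_rng(rng)
    z = rng.standard_normal((batch, dim))
    mask = np.abs(z) > threshold
    while mask.any():
        z[mask] = rng.standard_normal(mask.sum())
        mask = np.abs(z) > threshold
    return z

z = truncated_z(4, 128, threshold=0.5, rng=0)
print(np.abs(z).max())  # guaranteed <= 0.5
```

The threshold becomes a test-time knob: the generator is trained on untruncated noise, and truncation is applied only when sampling.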

A disciplined approach to neural network hyper-parameters: Part 1 - learning rate, batch size, momentum, and weight decay

This report shows how to examine the training and validation/test loss curves for subtle clues of underfitting and overfitting, suggests guidelines for moving toward the optimal balance point, and discusses how to adjust the learning rate and momentum to speed up training.
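The same author's "1cycle" policy is a concrete instance of these guidelines: ramp the learning rate linearly up to a maximum over the first half of training, then back down. A minimal illustrative sketch (the function name and the `div` factor are assumptions for this example, not values from the report):

```python
def one_cycle_lr(step, total_steps, max_lr, div=25.0):
    """Linear one-cycle schedule: ramp from max_lr/div up to max_lr over
    the first half of training, then back down over the second half.
    Momentum is typically cycled in the opposite direction."""
    base_lr = max_lr / div
    half = total_steps / 2.0
    frac = step / half if step <= half else (total_steps - step) / half
    return base_lr + frac * (max_lr - base_lr)

for s in (0, 25, 50, 75, 100):
    print(s, one_cycle_lr(s, 100, 1.0))
```

The schedule starts and ends at `max_lr / div` and peaks at `max_lr` mid-training.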

Progressive Growing of GANs for Improved Quality, Stability, and Variation

A new training methodology for generative adversarial networks is described, starting from a low resolution, and adding new layers that model increasingly fine details as training progresses, allowing for images of unprecedented quality.
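New layers are not switched on abruptly; the paper fades them in by linearly blending the upsampled output of the previous resolution with the new layer's output. A minimal sketch of that blending step (helper names are illustrative):

```python
import numpy as np

def fade_in(prev_upsampled, new_output, alpha):
    """Blend a newly added resolution layer into the network:
    output = (1 - alpha) * upsampled(previous) + alpha * new,
    with alpha ramped linearly from 0 to 1 during training."""
    return (1.0 - alpha) * prev_upsampled + alpha * new_output

low = np.zeros((4, 4))                    # previous-resolution output
low_up = np.kron(low, np.ones((2, 2)))    # nearest-neighbour 2x upsample
high = np.ones((8, 8))                    # new layer's output
blended = fade_in(low_up, high, alpha=0.25)
```

At `alpha=0` the network behaves exactly as before the new layer was added, which is what keeps training stable through each resolution transition.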

Towards the Automatic Anime Characters Creation with Generative Adversarial Networks

This work explores training GAN models specialized for an anime facial image dataset and addresses the issue from both the data and the model aspects, collecting a cleaner, well-suited dataset and leveraging a proper, empirical application of DRAGAN.

BEGAN: Boundary Equilibrium Generative Adversarial Networks

This work proposes a new equilibrium-enforcing method, paired with a loss derived from the Wasserstein distance, for training auto-encoder based Generative Adversarial Networks, which provides a new approximate convergence measure, fast and stable training, and high visual quality.
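The equilibrium is maintained by a control variable k that balances the discriminator's real and fake reconstruction losses; the same quantities yield the global convergence measure. A sketch of those two updates as given in the BEGAN paper (default hyper-parameter values are the paper's suggestions, to the best of my reading):

```python
def began_k_update(k, loss_real, loss_fake, gamma=0.5, lambda_k=0.001):
    """One step of the BEGAN equilibrium control variable:
    k_{t+1} = clip(k_t + lambda_k * (gamma * L(x) - L(G(z))), 0, 1).
    k weights how much the discriminator loss penalizes fakes."""
    k = k + lambda_k * (gamma * loss_real - loss_fake)
    return min(max(k, 0.0), 1.0)

def began_convergence(loss_real, loss_fake, gamma=0.5):
    """Global convergence measure M = L(x) + |gamma * L(x) - L(G(z))|."""
    return loss_real + abs(gamma * loss_real - loss_fake)

print(began_k_update(0.0, 1.0, 0.2))   # k moves toward equilibrium
print(began_convergence(1.0, 0.2))     # lower M = closer to convergence
```

Because M combines the absolute reconstruction quality with the distance from the gamma-defined equilibrium, it can be monitored as a single training-progress curve.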

Face aging with conditional generative adversarial networks

This work proposes the first GAN-based method for automatic face aging and introduces a novel approach for “Identity-Preserving” optimization of GAN's latent vectors.

Towards Principled Methods for Training Generative Adversarial Networks

The goal of this paper is to make theoretical steps towards fully understanding the training dynamics of generative adversarial networks, and performs targeted experiments to substantiate the theoretical analysis and verify assumptions, illustrate claims, and quantify the phenomena.