
Improving ClusterGAN Using Self-Augmented Information Maximization of Disentangling Latent Spaces

@article{Dam2021ImprovingCU,
  title={Improving ClusterGAN Using Self-Augmented Information Maximization of Disentangling Latent Spaces},
  author={Tanmoy Dam and Sreenatha G. Anavatti and Hussein A. Abbass},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.12706}
}
Abstract

The Latent Space Clustering in Generative Adversarial Networks (ClusterGAN) method has been successful with high-dimensional data. However, the method assumes uniformly distributed priors during the generation of modes, which is a restrictive assumption for real-world data and causes a loss of diversity in the generated modes. In this paper, we propose the self-augmented information maximization improved ClusterGAN (SIMI-ClusterGAN) to learn the distinctive priors from the data. The…
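For intuition, the sketch below illustrates the two ingredients the abstract names: ClusterGAN's mixed continuous/one-hot latent prior, and a self-augmented information-maximization objective. This is a minimal PyTorch sketch, not the authors' code; the function names, latent dimensions, the `cluster_probs` parameter, and the IMSAT-style form of the loss are illustrative assumptions, since the full method is not reproduced on this page.

```python
import torch
import torch.nn.functional as F

def sample_latent(batch, dim_n=30, n_clusters=10, sigma=0.1, cluster_probs=None):
    """ClusterGAN-style latent z = (z_n, z_c): Gaussian noise plus a
    one-hot cluster code. ClusterGAN draws the cluster index uniformly;
    passing a learned `cluster_probs` vector replaces that uniform
    assumption with data-driven prior weights, which is the restriction
    SIMI-ClusterGAN targets."""
    z_n = sigma * torch.randn(batch, dim_n)               # continuous part
    if cluster_probs is None:                             # ClusterGAN default
        cluster_probs = torch.full((n_clusters,), 1.0 / n_clusters)
    idx = torch.multinomial(cluster_probs, batch, replacement=True)
    z_c = F.one_hot(idx, n_clusters).float()              # discrete one-hot part
    return torch.cat([z_n, z_c], dim=1), idx

def self_augmented_info_max_loss(p, p_aug, lam=0.1):
    """One common form of self-augmented information maximization
    (in the spirit of IMSAT/RIM, an assumption here): maximize
    I(X; Y) = H(Y) - H(Y|X) over soft cluster assignments `p`, while
    keeping predictions on augmented inputs `p_aug` consistent.
    Returns a loss to *minimize*."""
    p_mean = p.mean(dim=0)                                    # marginal q(y)
    h_y = -(p_mean * torch.log(p_mean + 1e-8)).sum()          # H(Y)
    h_y_x = -(p * torch.log(p + 1e-8)).sum(dim=1).mean()      # H(Y|X)
    consistency = F.kl_div(torch.log(p_aug + 1e-8), p,
                           reduction="batchmean")             # KL(p || p_aug)
    return consistency - lam * (h_y - h_y_x)
```

Calling `sample_latent(64)` reproduces ClusterGAN's uniform one-hot prior; passing a learnable probability vector (e.g., a softmax over free parameters trained jointly with the rest of the model) would stand in for the distinctive, data-driven priors the paper proposes to learn.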

References

Showing 1-10 of 53 references.
ClusterGAN: Latent Space Clustering in Generative Adversarial Networks
The results show a remarkable phenomenon: GANs can preserve latent space interpolation across categories, even though the discriminator is never exposed to such vectors.
Deep Clustering via Joint Convolutional Autoencoder Embedding and Relative Entropy Minimization
A new clustering model, DEeP Embedded RegularIzed ClusTering (DEPICT), is proposed that efficiently maps data into a discriminative embedding subspace and precisely predicts cluster assignments; results indicate the superiority and faster running time of DEPICT in real-world clustering tasks where no labeled data is available for hyperparameter tuning.
Inverting the Generator of a Generative Adversarial Network
This paper introduces a technique, inversion, to project data samples, specifically images, to the latent space using a pretrained GAN, and demonstrates how the proposed inversion technique may be used to quantitatively compare the performance of various GAN models trained on three image data sets.
Deep Spectral Clustering Using Dual Autoencoder Network
A joint learning framework for discriminative embedding and spectral clustering is proposed, which can significantly outperform state-of-the-art clustering approaches and be more robust to noise.
Towards K-means-friendly Spaces: Simultaneous Deep Learning and Clustering
A joint dimensionality reduction (DR) and K-means clustering approach is proposed in which DR is accomplished by learning a deep neural network (DNN), exploiting the DNN's ability to approximate any nonlinear function.
Y-Autoencoders: disentangling latent representations via sequential-encoding
A new model called Y-Autoencoder (Y-AE) is introduced, which provides significant experimental results on various domains, such as separation of style and content, image-to-image translation, and inverse graphics.
DeLiGAN: Generative Adversarial Networks for Diverse and Limited Data
The proposed DeLiGAN can generate images of handwritten digits, objects, and hand-drawn sketches, all using limited amounts of data, and introduces a modified version of the inception score, a measure which has been found to correlate well with human assessment of generated samples.
Adversarially Regularized Autoencoders
This work proposes a flexible method for training deep latent variable models of discrete structures based on the recently proposed Wasserstein autoencoder (WAE), and shows that the latent representation can be trained to perform unaligned textual style transfer, giving improvements in both automatic and human evaluation compared to existing methods.
InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets
Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods.
Isolating Sources of Disentanglement in Variational Autoencoders
We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables. We use this to motivate our $\beta$-TCVAE (Total Correlation…