Corpus ID: 236447844

# Improving ClusterGAN Using Self-Augmented Information Maximization of Disentangling Latent Spaces

@article{Dam2021ImprovingCU,
title={Improving ClusterGAN Using Self-Augmented Information Maximization of Disentangling Latent Spaces},
author={Tanmoy Dam and Sreenatha G. Anavatti and Hussein A. Abbass},
journal={ArXiv},
year={2021},
volume={abs/2107.12706}
}
#### Abstract

The Latent Space Clustering in Generative Adversarial Networks (ClusterGAN) method has been successful with high-dimensional data. However, the method assumes uniformly distributed priors during the generation of modes; this is a restrictive assumption for real-world data and causes a loss of diversity in the generated modes. In this paper, we propose a self-augmented information maximization improved ClusterGAN (SIMI-ClusterGAN) to learn the distinctive priors from the data. …
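The prior structure the abstract contrasts can be made concrete. A minimal sketch of a ClusterGAN-style mixed latent prior follows: Gaussian noise concatenated with a one-hot cluster code, where ClusterGAN samples the cluster uniformly. Allowing a non-uniform distribution `pi` (in the spirit of SIMI-ClusterGAN's data-driven priors) is a hypothetical extension added here for illustration; all names and shapes are illustrative, not the paper's implementation.

```python
import numpy as np

def sample_clustergan_prior(batch, n_clusters, noise_dim, sigma=0.1, pi=None, rng=None):
    """Sketch of a ClusterGAN-style mixed latent prior.

    z = (z_n, z_c): Gaussian noise z_n concatenated with a one-hot
    cluster code z_c. ClusterGAN draws the cluster uniformly; a
    learned distribution `pi` is a hypothetical extension here.
    """
    rng = rng or np.random.default_rng(0)
    z_n = sigma * rng.standard_normal((batch, noise_dim))
    if pi is None:                       # ClusterGAN case: uniform cluster prior
        pi = np.full(n_clusters, 1.0 / n_clusters)
    ks = rng.choice(n_clusters, size=batch, p=pi)
    z_c = np.eye(n_clusters)[ks]         # one-hot cluster codes
    return np.concatenate([z_n, z_c], axis=1)

z = sample_clustergan_prior(batch=4, n_clusters=10, noise_dim=30)
print(z.shape)  # (4, 40): 30 continuous dims + 10-way one-hot code
```

The one-hot block is what lets the generator place each cluster in a separate latent region; the loss of diversity the abstract mentions arises when the uniform choice of `ks` mismatches the true cluster proportions.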

#### References

Showing 1-10 of 53 references
ClusterGAN: Latent Space Clustering in Generative Adversarial Networks
• Computer Science, Mathematics
• AAAI
• 2019
The results show a remarkable phenomenon that GANs can preserve latent space interpolation across categories, even though the discriminator is never exposed to such vectors.
Deep Clustering via Joint Convolutional Autoencoder Embedding and Relative Entropy Minimization
• Mathematics, Computer Science
• 2017 IEEE International Conference on Computer Vision (ICCV)
• 2017
A new clustering model, DEeP Embedded Regularized ClusTering (DEPICT), is proposed, which efficiently maps data into a discriminative embedding subspace and precisely predicts cluster assignments; experiments indicate the superiority and faster running time of DEPICT in real-world clustering tasks, where no labeled data is available for hyper-parameter tuning.
Inverting the Generator of a Generative Adversarial Network
• Computer Science, Medicine
• IEEE Transactions on Neural Networks and Learning Systems
• 2019
This paper introduces an inversion technique to project data samples, specifically images, into the latent space of a pretrained GAN, and demonstrates how the proposed technique may be used to quantitatively compare the performance of various GAN models trained on three image data sets.
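The inversion idea summarized above (find a latent code whose generated output matches a target sample) can be sketched in a few lines. Here the "generator" is a fixed random linear map, a toy stand-in for a pretrained GAN generator, and the latent is recovered by gradient descent on the squared reconstruction error; this is an illustration of the optimization principle, not the paper's actual procedure.

```python
import numpy as np

# Toy sketch of GAN inversion: recover a latent z for a target x
# by gradient descent on ||G(z) - x||^2. W is a stand-in for a
# pretrained generator, scaled so the descent is stable.
rng = np.random.default_rng(42)
latent_dim, data_dim = 8, 32
W = rng.standard_normal((data_dim, latent_dim)) / np.sqrt(data_dim)
G = lambda z: W @ z

z_true = rng.standard_normal(latent_dim)
x = G(z_true)                         # target sample to invert

z = np.zeros(latent_dim)              # start from an arbitrary latent
lr = 0.1
for _ in range(2000):
    grad = 2 * W.T @ (G(z) - x)       # gradient of ||Wz - x||^2 w.r.t. z
    z -= lr * grad

err = np.linalg.norm(G(z) - x)
print(err)  # close to 0: a latent reproducing x has been found
```

With a real (nonlinear, non-invertible) generator the same loop needs autodiff, multiple restarts, and often a perceptual loss, but the objective is the same.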
Deep Spectral Clustering Using Dual Autoencoder Network
• Xu Yang
• Computer Science, Mathematics
• 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
• 2019
A joint learning framework for discriminative embedding and spectral clustering is proposed, which significantly outperforms state-of-the-art clustering approaches and is more robust to noise.
Towards K-means-friendly Spaces: Simultaneous Deep Learning and Clustering
• Computer Science
• ICML
• 2017
A joint DR and K-means clustering approach is proposed, in which DR is accomplished by learning a deep neural network (DNN), exploiting the DNN's ability to approximate any nonlinear function.
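The joint objective behind this line of work combines a reconstruction term with the distance of each embedding to its assigned centroid. The following toy computes that combined loss with linear maps standing in for the DNN encoder/decoder; all names (`E`, `D`, `M`, `s`, `lam`) are illustrative, not the paper's notation.

```python
import numpy as np

# Sketch of a "K-means-friendly" joint objective:
#   L = ||X - decode(encode(X))||^2 + lam * ||encode(X) - M[s]||^2
# i.e. autoencoder reconstruction plus the K-means cost in the
# embedding space. Linear E/D stand in for the DNN here.
rng = np.random.default_rng(0)
n, d, k_dim, k = 6, 10, 3, 2
X = rng.standard_normal((n, d))
E = rng.standard_normal((k_dim, d)) * 0.3   # "encoder"
D = rng.standard_normal((d, k_dim)) * 0.3   # "decoder"
M = rng.standard_normal((k, k_dim))         # cluster centroids

Z = X @ E.T                                  # embeddings, shape (n, k_dim)
s = np.argmin(((Z[:, None, :] - M[None]) ** 2).sum(-1), axis=1)  # assignments
recon = ((X - Z @ D.T) ** 2).sum()           # reconstruction cost
cluster = ((Z - M[s]) ** 2).sum()            # K-means cost in embedding space
lam = 0.5
loss = recon + lam * cluster
print(s, loss)
```

Training alternates between this assignment step (update `s` and `M` as in K-means) and gradient steps on the encoder/decoder, so the learned space becomes one where K-means works well.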
Y-Autoencoders: disentangling latent representations via sequential-encoding
• Computer Science
• Pattern Recognit. Lett.
• 2020
A new model, the Y-Autoencoder (Y-AE), is introduced, with significant experimental results on various domains, such as separation of style and content, image-to-image translation, and inverse graphics.
DeLiGAN: Generative Adversarial Networks for Diverse and Limited Data
• Computer Science
• 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
• 2017
The proposed DeLiGAN can generate images of handwritten digits, objects, and hand-drawn sketches, all using limited amounts of data, and a modified version of the inception score is introduced, a measure which has been found to correlate well with human assessment of generated samples.
Adversarially Regularized Autoencoders
• Computer Science, Mathematics
• ICML
• 2018
This work proposes a flexible method for training deep latent variable models of discrete structures based on the recently proposed Wasserstein autoencoder (WAE), and shows that the latent representation can be trained to perform unaligned textual style transfer, with improvements in both automatic and human evaluation compared to existing methods.
InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets
• Computer Science, Mathematics
• NIPS
• 2016
Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods.
Isolating Sources of Disentanglement in Variational Autoencoders
• Computer Science, Mathematics
• NeurIPS
• 2018
We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables, and use this to motivate our $\beta$-TCVAE (Total Correlation Variational Autoencoder). …
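The decomposition this summary refers to splits the aggregate KL term of the ELBO into three parts. The following is a reconstruction of that well-known result from the cited paper, in standard VAE notation ($q(z\mid x)$ the encoder, $p(z)$ the prior, $q(z_j)$ the marginal of latent dimension $j$), not text from this page:

```latex
\mathbb{E}_{p(x)}\!\left[\mathrm{KL}\big(q(z \mid x)\,\|\,p(z)\big)\right]
  = \underbrace{I_q(x; z)}_{\text{index-code MI}}
  + \underbrace{\mathrm{KL}\Big(q(z)\,\Big\|\,\textstyle\prod_j q(z_j)\Big)}_{\text{total correlation}}
  + \underbrace{\textstyle\sum_j \mathrm{KL}\big(q(z_j)\,\|\,p(z_j)\big)}_{\text{dimension-wise KL}}
```

$\beta$-TCVAE upweights only the middle (total correlation) term by a factor $\beta$, which is what encourages statistically independent, disentangled latent dimensions.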