• Corpus ID: 203641804

# Elastic-InfoGAN: Unsupervised Disentangled Representation Learning in Imbalanced Data

@article{Ojha2019ElasticInfoGANUD,
title={Elastic-InfoGAN: Unsupervised Disentangled Representation Learning in Imbalanced Data},
author={Utkarsh Ojha and Krishna Kumar Singh and Cho-Jui Hsieh and Yong Jae Lee},
journal={ArXiv},
year={2019},
volume={abs/1910.01112}
}
• Published 25 September 2019
• Computer Science
• ArXiv
We propose a novel unsupervised generative model, Elastic-InfoGAN, that learns to disentangle object identity from other low-level aspects in class-imbalanced datasets. We first investigate the issues surrounding InfoGAN's assumption of a uniform latent prior, and demonstrate its ineffectiveness at properly disentangling object identity in imbalanced data. Our key idea is to make the discovery of the discrete latent factor of variation invariant to identity-preserving transformations in real…
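The abstract's core departure from InfoGAN is that the discrete latent code is drawn from a *learnable*, possibly non-uniform categorical prior rather than a fixed uniform one, typically via the Gumbel-softmax relaxation so the class probabilities stay differentiable. A minimal NumPy sketch of that relaxation is below; the class probabilities are illustrative placeholders (in the actual model they would be parameters updated by gradient descent), and this is not the authors' implementation.

```python
import numpy as np

def gumbel_softmax_sample(logits, tau=1.0, rng=None):
    """Sample a relaxed one-hot vector from a categorical distribution
    given by `logits`, using the Gumbel-softmax (Concrete) trick.
    Lower `tau` pushes the sample closer to a hard one-hot vector."""
    rng = rng or np.random.default_rng(0)
    # Gumbel(0, 1) noise: g = -log(-log(U)), U ~ Uniform(0, 1)
    u = rng.uniform(1e-10, 1.0, size=logits.shape)
    g = -np.log(-np.log(u))
    y = (logits + g) / tau
    # numerically stable softmax
    y = np.exp(y - y.max())
    return y / y.sum()

# Hypothetical learnable log-probabilities for 3 imbalanced classes.
# Unlike InfoGAN's fixed uniform prior, these would be trained to
# match the (unknown) class frequencies of the imbalanced dataset.
class_logits = np.log(np.array([0.7, 0.2, 0.1]))
sample = gumbel_softmax_sample(class_logits, tau=0.5)
```

Because the sample is a soft probability vector rather than a hard index, gradients can flow back into `class_logits`, which is what lets the prior adapt to the data's imbalance during training.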
## Citations (2)


• Kunming Luo, Ao Luo
• Computer Science
IEEE Transactions on Circuits and Systems for Video Technology
• 2022
This work presents an unsupervised optical flow estimation method that introduces adaptive pyramid sampling in a deep pyramid network, along with a Content-Aware Pooling module that promotes local feature gathering by avoiding cross-region pooling, making the learned features more representative.
