Elastic-InfoGAN: Unsupervised Disentangled Representation Learning in Imbalanced Data
@article{Ojha2019ElasticInfoGANUD, title={Elastic-InfoGAN: Unsupervised Disentangled Representation Learning in Imbalanced Data}, author={Utkarsh Ojha and Krishna Kumar Singh and Cho-Jui Hsieh and Yong Jae Lee}, journal={ArXiv}, year={2019}, volume={abs/1910.01112} }
We propose a novel unsupervised generative model, Elastic-InfoGAN, that learns to disentangle object identity from other low-level aspects in class-imbalanced datasets. We first investigate the issues surrounding InfoGAN's assumption of a uniform latent category distribution, and demonstrate its inability to properly disentangle object identity in imbalanced data. Our key idea is to make the discovery of the discrete latent factor of variation invariant to identity-preserving transformations in real…
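The core mechanism described in the abstract can be sketched in code. The following is a minimal, illustrative reconstruction (not the authors' released implementation): it assumes a PyTorch setup, the invariance term is a simple KL-based stand-in for the paper's transformation-invariance objective, and names such as `sample_latent_codes` and `invariance_loss` are invented for this sketch.

```python
import torch
import torch.nn.functional as F

# Sketch of the two ideas from the abstract (illustrative, not the official code):
# 1) Replace InfoGAN's fixed uniform categorical prior with learnable class
#    log-probabilities, sampled via Gumbel-Softmax so gradients can flow into them.
# 2) Ask the recognition network Q to assign the same category to a real image
#    and to an identity-preserving transformation of that image.

num_classes = 10
class_logits = torch.zeros(num_classes, requires_grad=True)  # learnable latent prior

def sample_latent_codes(batch_size: int, temperature: float = 1.0) -> torch.Tensor:
    """Draw approximately one-hot category codes from the learned prior."""
    logits = class_logits.unsqueeze(0).expand(batch_size, -1)
    return F.gumbel_softmax(logits, tau=temperature, hard=False)

def invariance_loss(q_orig: torch.Tensor, q_aug: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Penalize disagreement between Q's class posteriors for a real image and its
    identity-preserving augmentation (a KL stand-in for the paper's loss)."""
    return F.kl_div((q_aug + eps).log(), q_orig, reduction="batchmean")

codes = sample_latent_codes(batch_size=8)
print(codes.shape, codes.sum(dim=1))  # torch.Size([8, 10]); each row sums to ~1
```

In the paper these pieces are trained jointly with the usual GAN and mutual-information losses; the sketch only isolates what differs from standard InfoGAN.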
2 Citations
Conditional Generation of Medical Images via Disentangled Adversarial Inference
- Computer Science · Medical Image Analysis
- 2021
ASFlow: Unsupervised Optical Flow Learning With Adaptive Pyramid Sampling
- Computer Science · IEEE Transactions on Circuits and Systems for Video Technology
- 2022
This work presents an unsupervised optical flow estimation method that introduces adaptive pyramid sampling in a deep pyramid network, together with a Content-Aware Pooling module that promotes local feature gathering by avoiding cross-region pooling, making the learned features more representative.
References
Showing 1–10 of 62 references
InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets
- Computer Science · NIPS
- 2016
Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods.
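For context (since Elastic-InfoGAN modifies this objective), InfoGAN augments the GAN value function with a variational lower bound on the mutual information between the latent code $c$ and the generated sample; in the notation of the original paper:

$$
\min_{G,Q}\;\max_{D}\; V_{\text{InfoGAN}}(D,G,Q) \;=\; V(D,G) \;-\; \lambda\, L_I(G,Q),
\qquad
L_I(G,Q) \;=\; \mathbb{E}_{c \sim p(c),\, x \sim G(z,c)}\!\big[\log Q(c \mid x)\big] \;+\; H(c)
$$

Elastic-InfoGAN keeps this objective but learns the categorical prior $p(c)$ instead of fixing it to be uniform.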
beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
- Computer Science · ICLR
- 2017
Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial…
A Framework for the Quantitative Evaluation of Disentangled Representations
- Computer Science · ICLR
- 2018
A framework for the quantitative evaluation of disentangled representations when the ground-truth latent structure is available is proposed, and three criteria are explicitly defined and quantified to elucidate the quality of learnt representations and thus compare models on an equal basis.
FineGAN: Unsupervised Hierarchical Disentanglement for Fine-Grained Object Generation and Discovery
- Computer Science · 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2019
FineGAN, a novel unsupervised GAN framework, disentangles the background, object shape, and object appearance to hierarchically generate images of fine-grained object categories, achieving the desired disentanglement.
Disentangling Factors of Variation by Mixing Them
- Computer Science · 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
This work proposes an approach to learn image representations that consist of disentangled factors of variation without exploiting any manual labeling or data domain knowledge; the approach includes a classification objective that ensures each chunk corresponds to a consistently discernible attribute of the represented image.
Learning Deep Representation for Imbalanced Classification
- Computer Science · 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2016
The representation learned by this approach, when combined with a simple k-nearest neighbor (kNN) algorithm, shows significant improvements over existing methods on both high- and low-level vision classification tasks that exhibit imbalanced class distribution.
Disentangling factors of variation in deep representation using adversarial training
- Computer Science · NIPS
- 2016
A conditional generative model is proposed for learning to disentangle the hidden factors of variation within a set of labeled observations and separating them into complementary codes that generalize to unseen classes and intra-class variabilities.
Learning Discrete Representations via Information Maximizing Self-Augmented Training
- Computer Science · ICML
- 2017
In IMSAT, data augmentation is used to impose invariance on discrete representations: the predicted representations of augmented data points are encouraged to be close to those of the original data points, in an end-to-end fashion that maximizes the information-theoretic dependency between data and their predicted discrete representations.
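Compressed into one formula (a paraphrase, not a verbatim quote of the paper), IMSAT maximizes the mutual information $I(X;Y) = H(Y) - H(Y \mid X)$ between inputs and their discrete predictions while penalizing disagreement under an augmentation $T$:

$$
\min_{\theta}\;\; \mathcal{R}_{\mathrm{SAT}}(\theta; T) \;-\; \lambda \big[\, H(Y) - H(Y \mid X) \,\big],
\qquad
\mathcal{R}_{\mathrm{SAT}}(\theta; T) \;=\; \mathbb{E}_{x}\!\left[ \mathrm{KL}\!\big( p_{\hat{\theta}}(y \mid x) \,\big\|\, p_{\theta}(y \mid T(x)) \big) \right]
$$

where $p_{\hat{\theta}}$ denotes the current predictions treated as fixed targets.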
Disentangling by Factorising
- Computer Science · ICML
- 2018
FactorVAE, a method that disentangles by encouraging the distribution of representations to be factorial and hence independent across the dimensions, is proposed and it improves upon $\beta$-VAE by providing a better trade-off between disentanglement and reconstruction quality.
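Concretely, FactorVAE maximizes the standard VAE objective minus a total-correlation penalty on the aggregate posterior (estimated adversarially with an auxiliary discriminator in the paper):

$$
\frac{1}{N}\sum_{i=1}^{N}\Big[ \mathbb{E}_{q(z \mid x_i)}\big[\log p(x_i \mid z)\big] - \mathrm{KL}\big(q(z \mid x_i)\,\|\,p(z)\big) \Big] \;-\; \gamma\, \mathrm{KL}\Big( q(z) \,\Big\|\, \textstyle\prod_{j} q(z_j) \Big)
$$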
Learning Disentangled Joint Continuous and Discrete Representations
- Computer Science · NeurIPS
- 2018
Experiments show that the framework disentangles continuous and discrete generative factors on various datasets and outperforms current disentangling methods when a discrete generative factor is prominent.