Disentangled Representations from Non-Disentangled Models
@article{Khrulkov2021DisentangledRF,
  title={Disentangled Representations from Non-Disentangled Models},
  author={Valentin Khrulkov and Leyla Mirvakhabova and I. Oseledets and Artem Babenko},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.06204}
}
Constructing disentangled representations is known to be a difficult task, especially in the unsupervised scenario. The dominant paradigm of unsupervised disentanglement is currently to train a generative model that separates different factors of variation in its latent space. This separation is typically enforced by training with specific regularization terms in the model's objective function. These terms, however, introduce additional hyperparameters responsible for the trade-off between…
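For concreteness, the best-known regularizer of this kind is the β-VAE objective, which scales the KL term of the ELBO by an extra hyperparameter β; values β > 1 push toward factorized latents at the cost of reconstruction quality:

```latex
\mathcal{L}_{\beta\text{-VAE}}(\theta, \phi; x)
  = \mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right]
  - \beta \,\mathrm{KL}\!\left(q_\phi(z \mid x) \,\|\, p(z)\right)
```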
4 Citations
Learning Disentangled Representation by Exploiting Pretrained Generative Models: A Contrastive Learning View
- Computer Science
- 2021
Disentanglement via Contrast (DisCo) achieves state-of-the-art disentangled representation learning and direction discovery, given pretrained non-disentangled generative models including GANs, VAEs, and flows.
Generative Modeling Helps Weak Supervision (and Vice Versa)
- Computer Science, ArXiv
- 2022
This work proposes a model fusing programmatic weak supervision and generative adversarial networks, provides a theoretical justification for this fusion, and is the first approach to enable data augmentation through weakly supervised synthetic images and pseudo-labels.
Self-supervised Enhancement of Latent Discovery in GANs
- Computer Science, ArXiv
- 2021
A Scale Ranking Estimator (SRE) is proposed, which is trained with self-supervision and enhances the disentanglement of directions obtained by existing unsupervised disentanglement techniques.
Visual Concepts Tokenization
- Computer Science
- 2022
An unsupervised transformer-based Visual Concepts Tokenization framework, dubbed VCT, tokenizes an image into a set of disentangled visual concept tokens, with each token corresponding to one type of independent visual concept.
References
Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations
- Computer Science, ICML
- 2019
This paper theoretically shows that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases on both the models and the data, and trains more than 12000 models covering the most prominent methods and evaluation metrics on seven different data sets.
Learning Deep Disentangled Embeddings with the F-Statistic Loss
- Computer Science, NeurIPS
- 2018
A new paradigm for discovering disentangled representations of class structure is proposed, along with a novel loss function based on the $F$ statistic, which describes the separation of two or more distributions.
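For reference, the classical $F$ statistic that this loss builds on is the ratio of between-class to within-class variability; for $K$ classes with $n_g$ samples each ($N$ total), a standard form is:

```latex
F = \frac{\sum_{g=1}^{K} n_g \,(\bar{x}_g - \bar{x})^2 / (K - 1)}
         {\sum_{g=1}^{K} \sum_{i=1}^{n_g} (x_{g,i} - \bar{x}_g)^2 / (N - K)}
```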
InfoGAN-CR and ModelCentrality: Self-supervised Model Training and Selection for Disentangling GANs
- Computer Science, ICML
- 2020
This work designs a novel approach for training disentangled GANs with self-supervision, proposing a contrastive regularizer inspired by a natural notion of disentanglement: latent traversal. It also proposes an unsupervised model-selection scheme, ModelCentrality, which uses generated synthetic samples to compute the medoid of a collection of models.
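A minimal sketch of the medoid-selection step, assuming a precomputed symmetric matrix of pairwise model distances (e.g. measured between the models' generated sample sets; the helper name is illustrative):

```python
import numpy as np

def select_medoid(distances: np.ndarray) -> int:
    """Return the index of the medoid model.

    `distances` is a symmetric [n_models, n_models] matrix of pairwise
    dissimilarities between models. The medoid is the model with the
    smallest total distance to all other models in the collection.
    """
    return int(np.argmin(distances.sum(axis=1)))
```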
Semi-Supervised StyleGAN for Disentanglement Learning
- Computer Science, ICML
- 2020
The impact of limited supervision is investigated, new metrics to quantify generator controllability are proposed, and it is found that there may exist a crucial trade-off between disentangled representation learning and controllable generation.
Variational Inference of Disentangled Latent Concepts from Unlabeled Observations
- Computer Science, ICLR
- 2018
This work considers the problem of unsupervised learning of disentangled representations from a large pool of unlabeled observations and proposes a variational-inference-based approach to infer disentangled latent factors.
Disentangling Disentanglement in Variational Autoencoders
- Computer Science, ICML
- 2019
We develop a generalisation of disentanglement in VAEs---decomposition of the latent representation---characterising it as the fulfilment of two factors: a) the latent encodings of the data having an…
A Sober Look at the Unsupervised Learning of Disentangled Representations and their Evaluation
- Computer Science, J. Mach. Learn. Res.
- 2020
This work theoretically shows that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases on both the models and the data, investigates concrete benefits of enforcing disentanglement of the learned representations, and considers a reproducible experimental setup covering several data sets.
Structured Disentangled Representations
- Computer Science, AISTATS
- 2019
Experiments on a variety of datasets demonstrate that the proposed two-level hierarchical objective not only disentangles discrete variables, but that doing so also improves disentanglement of other variables and, importantly, generalization even to unseen combinations of factors.
Isolating Sources of Disentanglement in Variational Autoencoders
- Computer Science, NeurIPS
- 2018
We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables. We use this to motivate our $\beta$-TCVAE (Total Correlation…
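The decomposition referenced here splits the expected KL term of the ELBO into three parts; β-TCVAE then up-weights only the total-correlation term:

```latex
\mathbb{E}_{p(x)}\left[\mathrm{KL}\big(q(z \mid x) \,\|\, p(z)\big)\right]
  = \underbrace{I_q(x; z)}_{\text{index-code MI}}
  + \underbrace{\mathrm{KL}\Big(q(z) \,\Big\|\, \prod\nolimits_j q(z_j)\Big)}_{\text{total correlation}}
  + \underbrace{\sum\nolimits_j \mathrm{KL}\big(q(z_j) \,\|\, p(z_j)\big)}_{\text{dimension-wise KL}}
```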
The Hessian Penalty: A Weak Prior for Unsupervised Disentanglement
- Computer Science, ECCV
- 2020
A model-agnostic, unbiased stochastic approximation of this term, based on Hutchinson's estimator, is proposed to compute it efficiently during training, and empirical evidence is provided that the Hessian Penalty encourages substantial shrinkage when applied to over-parameterized latent spaces.
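A minimal sketch of this estimator, assuming a PyTorch generator `G` mapping latents `z` to outputs; `num_rademacher` and `eps` are illustrative hyperparameter names, and the finite-difference scheme follows the Hutchinson-style approximation described above:

```python
import torch

def hessian_penalty(G, z, num_rademacher=2, eps=0.1):
    """Sketch of a stochastic Hessian Penalty estimate (hypothetical helper).

    Estimates sum_{i != j} H_ij^2, where H is the Hessian of G's output with
    respect to the latent z, via the variance of the quadratic form v^T H v
    over Rademacher vectors v. The quadratic form itself is approximated by a
    second-order central finite difference, so no explicit Hessian is formed.
    """
    Gz = G(z)
    quad_forms = []
    for _ in range(num_rademacher):
        # Rademacher direction: entries are +1 or -1 with equal probability.
        v = torch.randint(0, 2, z.shape, device=z.device).float() * 2 - 1
        # (G(z + eps*v) - 2*G(z) + G(z - eps*v)) / eps^2  ~=  v^T H v
        quad_forms.append((G(z + eps * v) - 2 * Gz + G(z - eps * v)) / eps**2)
    quad_forms = torch.stack(quad_forms)
    # Var_v[v^T H v] = 2 * sum_{i != j} H_ij^2 for Rademacher v; reduce the
    # per-unit variance estimates to a scalar penalty (max used here).
    return quad_forms.var(dim=0, unbiased=True).max()
```

In practice the penalty is added to the generator loss with its own weight, which reintroduces exactly the kind of trade-off hyperparameter the main paper seeks to avoid.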