Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations
@article{Locatello2019ChallengingCA, title={Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations}, author={Francesco Locatello and Stefan Bauer and Mario Lucic and Sylvain Gelly and Bernhard Sch{\"o}lkopf and Olivier Bachem}, journal={ArXiv}, year={2019}, volume={abs/1811.12359} }
The key idea behind the unsupervised learning of disentangled representations is that real-world data is generated by a few explanatory factors of variation which can be recovered by unsupervised learning algorithms. We first theoretically show that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases on both the models and the data.
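The flavor of the impossibility argument can be restated compactly (a hedged paraphrase of the paper's Theorem 1, not its exact statement):

```latex
% For d > 1, let z have a factorized density p(z) = \prod_i p(z_i).
% Then there exists a bijection f on supp(z) that is completely
% entangled yet leaves the marginal distribution unchanged:
\exists\, f:\operatorname{supp}(z)\to\operatorname{supp}(z)\ \text{bijective, with}\quad
\frac{\partial f_i(u)}{\partial u_j}\neq 0\ \text{a.e. for all } i,j,
\quad\text{and}\quad
P(z\le u)=P(f(z)\le u)\ \text{for all } u.
```

Since z and f(z) induce the same distribution over observations, a purely unsupervised learner has no way to prefer the disentangled z over the entangled f(z) without additional inductive bias.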
735 Citations
Disentangling Factors of Variation Using Few Labels
- Computer Science · ICLR
- 2020
Overall, this paper empirically validates that with little and imprecise supervision it is possible to reliably learn disentangled representations.
Disentangled Representations from Non-Disentangled Models
- Computer Science · ArXiv
- 2021
This paper proposes to extract disentangled representations from state-of-the-art generative models trained without disentangling terms in their objectives, requiring few or no hyperparameters when learning representations while achieving results on par with existing state-of-the-art models.
A Heuristic for Unsupervised Model Selection for Variational Disentangled Representation Learning
- Computer Science · ICLR
- 2020
The approach, Unsupervised Disentanglement Ranking (UDR), leverages the recent theoretical results that explain why variational autoencoders disentangle, to quantify the quality of disentangled representations by performing pairwise comparisons between trained model representations.
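A simplified sketch of the pairwise-comparison idea: encode the same batch with two independently trained models and check whether their latent dimensions line up one-to-one. The aggregation below is an illustrative stand-in, not the paper's exact UDR score.

```python
import numpy as np

def pairwise_udr_sketch(z_a, z_b, eps=1e-12):
    """Crude stand-in for a UDR-style pairwise model comparison.

    z_a, z_b: (n_samples, n_latents) codes from two independently
    trained models on the same inputs. Returns a score in [0, 1];
    values near 1 indicate a roughly one-to-one correspondence
    between latent dimensions, taken as evidence of disentanglement.
    """
    d = z_a.shape[1]
    r = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            r[i, j] = abs(np.corrcoef(z_a[:, i], z_b[:, j])[0, 1])
    # Reward rows/columns dominated by a single strong match.
    row = (r.max(axis=1) ** 2 / (r.sum(axis=1) + eps)).mean()
    col = (r.max(axis=0) ** 2 / (r.sum(axis=0) + eps)).mean()
    return 0.5 * (row + col)
```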
Group-disentangled Representation Learning with Weakly-Supervised Regularization
- Computer Science · ArXiv
- 2021
GroupVAE is proposed, a simple yet effective Kullback-Leibler divergence-based regularization across shared latent representations that enforces consistent and disentangled representations and improves performance on downstream tasks, including fair classification and 3D shape-related tasks.
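A minimal numpy sketch of a KL-based consistency penalty between the posteriors of two observations that share group factors; the diagonal-Gaussian assumption, the symmetrization, and the `shared_dims` split are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gaussian_kl(mu1, logvar1, mu2, logvar2):
    """KL( N(mu1, diag(exp(logvar1))) || N(mu2, diag(exp(logvar2))) )."""
    var1, var2 = np.exp(logvar1), np.exp(logvar2)
    return 0.5 * np.sum(
        logvar2 - logvar1 + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

def shared_latent_penalty(mu_x, logvar_x, mu_y, logvar_y, shared_dims):
    """Symmetrized KL on the latent dims assumed to carry group-shared
    factors, pushing paired same-group observations toward consistent
    posteriors there (the shared_dims split is hypothetical)."""
    s = np.asarray(shared_dims)
    fwd = gaussian_kl(mu_x[s], logvar_x[s], mu_y[s], logvar_y[s])
    rev = gaussian_kl(mu_y[s], logvar_y[s], mu_x[s], logvar_x[s])
    return 0.5 * (fwd + rev)
```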
When Is Unsupervised Disentanglement Possible?
- Computer Science · NeurIPS
- 2021
The results suggest that in some realistic settings, unsupervised disentanglement is provably possible, without any domain-specific assumptions.
Weakly-Supervised Disentanglement Without Compromises
- Computer Science · ICML
- 2020
This work theoretically shows that only knowing how many factors have changed, but not which ones, is sufficient to learn disentangled representations, and provides practical algorithms that learn disentangled representations from pairs of images without requiring annotation of groups, individual factors, or the number of factors that have changed.
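A hedged numpy sketch of the adaptive idea behind such algorithms: given a pair in which an unknown subset of k factors changed, treat the latent dimensions with the largest posterior divergence as changed and tie the rest together. The divergence measure and averaging rule here are illustrative, not the paper's exact procedure.

```python
import numpy as np

def tie_shared_dims(mu1, logvar1, mu2, logvar2, k_changed):
    """Guess which latent dims changed between a pair of encodings and
    average the posteriors on the remaining (shared) dims.

    k_changed: how many factors changed (known), but not which ones.
    Returns updated (mu1, logvar1, mu2, logvar2) and the changed dims.
    """
    var1, var2 = np.exp(logvar1), np.exp(logvar2)
    # Per-dimension symmetric divergence between the two posteriors.
    delta = 0.5 * ((var1 / var2 + var2 / var1 - 2.0)
                   + (mu1 - mu2) ** 2 * (1.0 / var1 + 1.0 / var2))
    changed = np.argsort(delta)[-k_changed:]   # most divergent dims
    shared = np.setdiff1d(np.arange(mu1.size), changed)
    avg_mu = 0.5 * (mu1[shared] + mu2[shared])
    avg_logvar = np.log(0.5 * (var1[shared] + var2[shared]))
    for mu, lv in ((mu1, logvar1), (mu2, logvar2)):
        mu[shared], lv[shared] = avg_mu, avg_logvar
    return mu1, logvar1, mu2, logvar2, changed
```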
Unsupervised learning of disentangled representations in deep restricted kernel machines with orthogonality constraints
- Computer Science · Neural Networks
- 2021
Quantifying and Learning Disentangled Representations with Limited Supervision
- Computer Science · ArXiv
- 2020
A metric to quantify Linear Symmetry-Based Disentanglement (LSBD) representations that is easy to compute under certain well-defined assumptions, together with a method that can leverage unlabeled data so that LSBD representations can be learned with limited supervision on transformations.
Theory and Evaluation Metrics for Learning Disentangled Representations
- Computer Science · ICLR
- 2020
This work characterizes the concept of "disentangled representations" used in supervised and unsupervised methods along three dimensions: informativeness, separability, and interpretability, all of which can be expressed and quantified explicitly using information-theoretic constructs.
Is Independence all you need? On the Generalization of Representations Learned from Correlated Data
- Computer Science · ArXiv
- 2020
This work bridges the gap to real-world scenarios by analyzing the behavior of the most prominent methods and disentanglement scores on correlated data in a large-scale empirical study (including 3900 models).
References
Showing 1-10 of 72 references
Learning Deep Disentangled Embeddings with the F-Statistic Loss
- Computer Science · NeurIPS
- 2018
A new paradigm for discovering disentangled representations of class structure is proposed, along with a novel loss function based on the $F$ statistic, which measures the separation of two or more distributions.
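The classical one-way F statistic the loss builds on is straightforward to state; a small numpy sketch for scoring per-dimension class separation (the paper's differentiable loss itself is not reproduced here):

```python
import numpy as np

def f_statistic(groups):
    """One-way ANOVA F statistic: between-group variance over
    within-group variance. `groups` is a list of 1-D arrays, one per
    class; large values mean well-separated class distributions."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = np.concatenate(groups).mean()
    ms_between = sum(len(g) * (g.mean() - grand_mean) ** 2
                     for g in groups) / (k - 1)
    ms_within = sum(((g - g.mean()) ** 2).sum()
                    for g in groups) / (n - k)
    return ms_between / ms_within
```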
Learning Disentangled Representations with Semi-Supervised Deep Generative Models
- Computer Science · NIPS
- 2017
This work proposes to learn disentangled representations that encode distinct aspects of the data into separate variables using model architectures that generalise from standard VAEs, employing a general graphical model structure in the encoder and decoder.
A Framework for the Quantitative Evaluation of Disentangled Representations
- Computer Science · ICLR
- 2018
A framework for the quantitative evaluation of disentangled representations when the ground-truth latent structure is available is proposed and three criteria are explicitly defined and quantified to elucidate the quality of learnt representations and thus compare models on an equal basis.
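One of the explicitly defined criteria, per-code disentanglement, can be sketched from a latent-to-factor importance matrix using the framework's entropy-style formulation; the importance estimator (e.g., regressor feature importances) is left abstract, and the exact weighting here is an assumption.

```python
import numpy as np

def disentanglement_score(importance):
    """importance[i, j]: how much latent code dim i matters for
    predicting ground-truth factor j (e.g., random-forest feature
    importances). Each code dim scores 1 minus the entropy of its
    normalized importance row: 1 if it predicts a single factor."""
    p = importance / importance.sum(axis=1, keepdims=True)
    k = importance.shape[1]
    entropy = -(p * np.log(p + 1e-12)).sum(axis=1) / np.log(k)
    per_dim = 1.0 - entropy
    # Weight dims by their share of total importance.
    weights = importance.sum(axis=1) / importance.sum()
    return (weights * per_dim).sum()
```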
Multi-Level Variational Autoencoder: Learning Disentangled Representations from Grouped Observations
- Computer Science · AAAI
- 2018
The Multi-Level Variational Autoencoder (ML-VAE), a new deep probabilistic model for learning a disentangled representation of a set of grouped observations, separates the latent representation into semantically meaningful parts by working both at the group level and the observation level, while retaining efficient test-time inference.
Interventional Robustness of Deep Latent Variable Models
- Computer Science · ArXiv
- 2018
The interventional robustness score is introduced, which provides a quantitative evaluation of robustness in learned representations with respect to interventions on generative factors and changing nuisance factors. It is shown how this score can be estimated from labeled observational data that may be confounded, and an efficient algorithm is provided that scales linearly in the dataset size.
beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
- Computer Science · ICLR
- 2017
Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial…
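The constrained objective this line of work optimizes is standard and worth stating explicitly; the notation below follows the usual VAE conventions.

```latex
% beta-VAE objective: the ELBO with the KL term upweighted by a
% factor beta > 1 to pressure the posterior toward the factorized
% prior p(z), trading reconstruction quality for disentanglement:
\mathcal{L}(\theta,\phi;x) =
\mathbb{E}_{q_\phi(z\mid x)}\!\left[\log p_\theta(x\mid z)\right]
- \beta\, D_{\mathrm{KL}}\!\left(q_\phi(z\mid x)\,\|\,p(z)\right)
```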
Variational Inference of Disentangled Latent Concepts from Unlabeled Observations
- Computer Science · ICLR
- 2018
This work considers the problem of unsupervised learning of disentangled representations from a large pool of unlabeled observations and proposes a variational-inference-based approach to infer disentangled latent factors.
Competitive Training of Mixtures of Independent Deep Generative Models
- Computer Science
- 2018
This work considers mixtures of implicit generative models that "disentangle" the independent generative mechanisms underlying the data and proposes a competitive training procedure in which the models only need to capture the portion of the data distribution from which they can produce realistic samples.
Recent Advances in Autoencoder-Based Representation Learning
- Computer Science · ArXiv
- 2018
An in-depth review of recent advances in representation learning, with a focus on autoencoder-based models that make use of meta-priors believed useful for downstream tasks, such as disentanglement and hierarchical organization of features.
Robustly Disentangled Causal Mechanisms: Validating Deep Representations for Interventional Robustness
- Computer Science · ICML
- 2019
A causal perspective on representation learning is provided which covers disentanglement and domain shift robustness as special cases, and a new metric for the quantitative evaluation of deep latent variable models is introduced.