Corpus ID: 168169556

A Heuristic for Unsupervised Model Selection for Variational Disentangled Representation Learning

@article{Duan2020AHF,
  title={A Heuristic for Unsupervised Model Selection for Variational Disentangled Representation Learning},
  author={Sunny Duan and Nicholas Watters and Lo{\"i}c Matthey and Christopher P. Burgess and Alexander Lerchner and Irina Higgins},
  journal={ArXiv},
  year={2020},
  volume={abs/1905.12614}
}
Disentangled representations have recently been shown to improve fairness, data efficiency and generalisation in simple supervised and reinforcement learning tasks. To extend the benefits of disentangled representations to more complex domains and practical applications, it is important to enable hyperparameter tuning and model selection of existing unsupervised approaches without requiring access to ground truth attribute labels, which are not available for most datasets. This paper addresses…
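The heuristic the paper proposes, Unsupervised Disentanglement Ranking (UDR), scores each trained model by how consistently its latent dimensions align one-to-one with those of other models trained on the same data under different seeds, since disentangled solutions tend to agree up to permutation and sign. The sketch below is a minimal illustration of that idea, assuming mean latent responses on a shared batch of inputs; the function names are mine, plain correlation stands in for the paper's similarity measures, and details such as filtering out uninformative latents are omitted.

```python
import numpy as np

def pairwise_udr(z_a, z_b):
    """Illustrative pairwise agreement score between two models' latents.
    z_a, z_b: (n_samples, n_latents) mean latent responses on the same inputs.
    A score near 1 means the latents match one-to-one (up to permutation)."""
    n = z_a.shape[1]
    # Absolute correlation between every latent of model A and of model B.
    sim = np.abs(np.corrcoef(z_a.T, z_b.T)[:n, n:])  # (n_latents, n_latents)
    # Reward similarity matrices close to a permutation matrix: each latent
    # should correlate strongly with exactly one latent of the other model.
    row = (sim.max(axis=1) ** 2 / np.maximum(sim.sum(axis=1), 1e-8)).mean()
    col = (sim.max(axis=0) ** 2 / np.maximum(sim.sum(axis=0), 1e-8)).mean()
    return 0.5 * (row + col)

def udr_scores(latents):
    """Rank models by average pairwise agreement with all other models."""
    return np.array([
        np.mean([pairwise_udr(latents[i], latents[j])
                 for j in range(len(latents)) if j != i])
        for i in range(len(latents))
    ])
```

On this heuristic, a model whose latents agree with many of its differently-seeded siblings is preferred at model-selection time, with no ground-truth labels required.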
Citations

Robust Disentanglement of a Few Factors at a Time
This work introduces population-based training (PBT) to improve consistency in training variational autoencoders (VAEs), demonstrates the validity of the approach in a supervised setting, and introduces the recursive rPU-VAE approach, which shows striking improvements in state-of-the-art unsupervised disentanglement performance and robustness across multiple datasets and metrics.
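Population-based training itself is a previously published procedure; a generic sketch of its exploit/explore loop follows to make the mechanism concrete. It is not the paper's rPU-VAE recipe: the member structure and hyperparameter perturbation here are illustrative assumptions.

```python
import copy
import random

def pbt_step(population, evaluate, perturb=0.2):
    """One generic exploit/explore step of population-based training.
    population: list of dicts with 'params' (model state) and 'hypers'
    (e.g. the beta of a beta-VAE). evaluate: member -> fitness score."""
    ranked = sorted(population, key=evaluate)        # worst first
    cutoff = max(1, len(ranked) // 4)
    for weak in ranked[:cutoff]:                     # bottom quartile
        strong = random.choice(ranked[-cutoff:])     # copy a top performer
        weak['params'] = copy.deepcopy(strong['params'])          # exploit
        weak['hypers'] = {k: v * random.choice((1 - perturb, 1 + perturb))
                          for k, v in strong['hypers'].items()}   # explore
    return population
```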
A Sober Look at the Unsupervised Learning of Disentangled Representations and their Evaluation
This work theoretically shows that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases on both the models and the data, investigates concrete benefits of enforcing disentanglement of the learned representations, and considers a reproducible experimental setup covering several datasets.
Odd-One-Out Representation Learning
A weakly-supervised downstream task based on odd-one-out observations is shown to be suitable for model selection, correlating highly with performance on a difficult downstream abstract visual reasoning task; it is empirically shown that a bespoke metric-learning VAE model that performs well on this task also outperforms other standard unsupervised and weakly-supervised disentanglement models across several metrics.
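To make the task concrete: given a small set of observations, all but one of which share a value of some generative factor, the model must identify the one that does not. One simple embedding-based answer, sketched here under the assumption that the odd item lies farthest from the rest in latent space (the paper's exact protocol may differ):

```python
import numpy as np

def odd_one_out(embeddings):
    """Return the index of the item whose embedding is, on average,
    farthest from the others. embeddings: (n_items, dim) array."""
    dists = np.linalg.norm(embeddings[:, None] - embeddings[None, :], axis=-1)
    return int(dists.sum(axis=1).argmax())
```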
Demystifying Inductive Biases for β-VAE Based Architectures
This work sheds light on the inductive bias responsible for the success of VAE-based architectures, showing that in classical datasets the structure of variance induced by the generating factors is conveniently aligned with the latent directions fostered by the VAE objective.
GL-DISEN: Global-Local Disentanglement for Unsupervised Learning of Graph-Level Representations (2020)
Graph-level representation learning plays a crucial role in a variety of tasks such as molecular property prediction and community analysis. Currently, several models based on mutual information…
Disentanglement and Local Directions of Variance
This work quantifies the effects of global and local directions of variance in the data on disentanglement performance using proposed measures, and finds empirical evidence suggesting a negative effect of local variance directions on disentanglement.
A Commentary on the Unsupervised Learning of Disentangled Representations
This commentary discusses the theoretical result that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases, and the practical challenges this entails.
Learning Disentangled Representations in the Imaging Domain
This survey motivates the need for disentangled representations, presents key theory and detailed practical building blocks and criteria for learning such representations, and discusses applications in medical imaging and computer vision, emphasising choices made in exemplar key works.
An Empirical Study of Uncertainty Gap for Disentangling Factors
It is empirically found that the significant factor with the largest Uncertainty Gap should be disentangled before insignificant factors, indicating that a suitable ordering of the factors to be disentangled facilitates performance.

References

Showing 1–10 of 59 references
Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations
This paper theoretically shows that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases on both the models and the data, and trains more than 12,000 models covering the most prominent methods and evaluation metrics on seven different datasets.
Learning Deep Disentangled Embeddings with the F-Statistic Loss
A new paradigm for discovering disentangled representations of class structure is proposed, along with a novel loss function based on the $F$ statistic, which measures the separation of two or more distributions.
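The $F$ statistic in question is the classic one-way ANOVA ratio of between-class to within-class variance; larger values mean the class-conditional embedding distributions are better separated. A minimal sketch for one embedding dimension (the paper builds its loss on top of this statistic; that construction is not reproduced here):

```python
import numpy as np

def f_statistic(groups):
    """One-way ANOVA F statistic for 1-D samples grouped by class.
    groups: list of 1-D arrays, one per class."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = np.concatenate(groups).mean()
    # Between-class variance (k - 1 degrees of freedom).
    between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups) / (k - 1)
    # Within-class variance (n - k degrees of freedom).
    within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n - k)
    return between / within
```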
A Framework for the Quantitative Evaluation of Disentangled Representations
A framework for the quantitative evaluation of disentangled representations when the ground-truth latent structure is available is proposed; three criteria are explicitly defined and quantified to elucidate the quality of learnt representations and thus compare models on an equal basis.
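One of the criteria this framework defines, disentanglement, can be computed from a matrix of feature importances obtained by regressing each ground-truth factor on the latents. A sketch under that assumption follows; it is illustrative only, and the framework's other two criteria and its choice of regressor are not reproduced here.

```python
import numpy as np

def disentanglement_score(R):
    """Disentanglement from an importance matrix R (n_latents x n_factors):
    each latent should devote its importance mass to a single factor.
    Returns a score in [0, 1]."""
    P = R / np.maximum(R.sum(axis=1, keepdims=True), 1e-12)
    # Entropy of each latent's factor distribution, normalised to [0, 1].
    H = -(P * np.log(P + 1e-12)).sum(axis=1) / np.log(R.shape[1])
    weights = R.sum(axis=1) / R.sum()   # weight latents by total importance
    return float((weights * (1.0 - H)).sum())
```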
Variational Inference of Disentangled Latent Concepts from Unlabeled Observations
This work considers the problem of unsupervised learning of disentangled representations from a large pool of unlabeled observations, and proposes a variational inference based approach to infer disentangled latent factors.
beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial…
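For reference, the constrained objective behind beta-VAE is the standard ELBO with the KL term up-weighted by a coefficient $\beta$; choosing $\beta > 1$ pushes the posterior towards the factorised isotropic Gaussian prior, which is what encourages disentangling:

$$\mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z|x)}\big[\log p_\theta(x|z)\big] - \beta \, D_{\mathrm{KL}}\big(q_\phi(z|x) \,\|\, p(z)\big)$$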
Life-Long Disentangled Representation Learning with Cross-Domain Latent Homologies
This work proposes a novel algorithm for unsupervised representation learning from piece-wise stationary visual data: the Variational Autoencoder with Shared Embeddings (VASE), which automatically detects shifts in the data distribution and allocates spare representational capacity to new knowledge, while simultaneously protecting previously learnt representations from catastrophic forgetting.
Interventional Robustness of Deep Latent Variable Models
The interventional robustness score is introduced, providing a quantitative evaluation of robustness in learned representations with respect to interventions on generative factors and changes in nuisance factors; the authors show how this score can be estimated from labeled observational data that may be confounded, and provide an efficient algorithm that scales linearly in the dataset size.
Disentangling Disentanglement in Variational Autoencoders
We develop a generalisation of disentanglement in VAEs---decomposition of the latent representation---characterising it as the fulfilment of two factors: a) the latent encodings of the data having an…
Towards a Definition of Disentangled Representations
It is suggested that transformations that change only some properties of the underlying world state, while leaving all other properties invariant, are what give exploitable structure to any kind of data.
Isolating Sources of Disentanglement in Variational Autoencoders
We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables. We use this to motivate our $\beta$-TCVAE (Total Correlation…
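The decomposition in question is the now-standard one: averaged over the data, the posterior KL splits into an index-code mutual information term, a total correlation (TC) term, and a dimension-wise KL term, and $\beta$-TCVAE up-weights only the TC term:

$$\mathbb{E}_{p(x)}\big[D_{\mathrm{KL}}\big(q(z|x)\,\|\,p(z)\big)\big] = I_q(x;z) + D_{\mathrm{KL}}\Big(q(z)\,\Big\|\,\prod\nolimits_j q(z_j)\Big) + \sum\nolimits_j D_{\mathrm{KL}}\big(q(z_j)\,\|\,p(z_j)\big)$$

where $q(z)=\mathbb{E}_{p(x)}[q(z|x)]$ is the aggregate posterior and the middle term is the total correlation between the latent dimensions.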