Corpus ID: 3638140

Isolating Sources of Disentanglement in Variational Autoencoders

@inproceedings{Chen2018IsolatingSO,
  title={Isolating Sources of Disentanglement in Variational Autoencoders},
  author={Tian Qi Chen and Xuechen Li and Roger B. Grosse and David Kristjanson Duvenaud},
  booktitle={NeurIPS},
  year={2018}
}
We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables. We use this to motivate our $\beta$-TCVAE (Total Correlation Variational Autoencoder), a refinement of the state-of-the-art $\beta$-VAE objective for learning disentangled representations, requiring no additional hyperparameters during training. We further propose a principled classifier-free measure of disentanglement called the mutual information gap (MIG). We… 
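The decomposition the abstract alludes to can be made concrete. Below is a sketch of the ELBO TC-decomposition, the resulting $\beta$-TCVAE objective, and the MIG metric, reconstructed from the paper's notation ($n$ indexes datapoints, $z$ is the latent code, $q(z)$ is the aggregate posterior); consult the paper for the full derivation.

```latex
% The average posterior-prior KL term of the ELBO decomposes into three parts:
% (i) index-code mutual information, (ii) total correlation (TC),
% (iii) dimension-wise KL.
\mathbb{E}_{p(n)}\big[\mathrm{KL}\big(q(z \mid n) \,\|\, p(z)\big)\big]
  = \underbrace{I_q(z; n)}_{\text{index-code MI}}
  + \underbrace{\mathrm{KL}\Big(q(z) \,\Big\|\, \textstyle\prod_j q(z_j)\Big)}_{\text{total correlation}}
  + \underbrace{\textstyle\sum_j \mathrm{KL}\big(q(z_j) \,\|\, p(z_j)\big)}_{\text{dimension-wise KL}}

% beta-TCVAE penalizes only the TC term (alpha = gamma = 1 in the paper),
% so beta remains the single hyperparameter, as in beta-VAE:
\mathcal{L}_{\beta\text{-TC}}
  = \mathbb{E}_{q(z \mid n) p(n)}\big[\log p(n \mid z)\big]
  - \alpha\, I_q(z; n)
  - \beta\, \mathrm{KL}\Big(q(z) \,\Big\|\, \textstyle\prod_j q(z_j)\Big)
  - \gamma \textstyle\sum_j \mathrm{KL}\big(q(z_j) \,\|\, p(z_j)\big)

% Mutual information gap (MIG), averaged over K ground-truth factors v_k,
% where j^(k) is the latent dimension most informative about v_k:
\mathrm{MIG}
  = \frac{1}{K} \sum_{k=1}^{K} \frac{1}{H(v_k)}
    \Big( I\big(z_{j^{(k)}}; v_k\big) - \max_{j \neq j^{(k)}} I\big(z_j; v_k\big) \Big),
  \qquad j^{(k)} = \arg\max_j I(z_j; v_k)
```

To make MIG concrete, here is a minimal, hypothetical Python sketch. It is not the authors' implementation: the histogram discretization, the bin count, and the scikit-learn plug-in MI estimator are illustrative assumptions, and it presumes discrete, non-degenerate ground-truth factors.

```python
# Hypothetical MIG sketch: the gap between the largest and second-largest
# mutual information I(z_j; v_k), normalized by the factor entropy H(v_k).
import numpy as np
from sklearn.metrics import mutual_info_score

def mig_score(latents: np.ndarray, factors: np.ndarray, n_bins: int = 20) -> float:
    """latents: (N, D) continuous codes; factors: (N, K) discrete labels."""
    N, D = latents.shape
    _, K = factors.shape
    # Discretize each latent dimension so a plug-in MI estimator applies.
    binned = np.stack(
        [np.digitize(latents[:, j],
                     np.histogram_bin_edges(latents[:, j], bins=n_bins))
         for j in range(D)],
        axis=1,
    )
    gaps = []
    for k in range(K):
        v = factors[:, k]
        h = mutual_info_score(v, v)  # I(v; v) = H(v), the normalizer
        # Mutual information of every latent dimension with this factor.
        mi = np.array([mutual_info_score(binned[:, j], v) for j in range(D)])
        top2 = np.sort(mi)[-2:]      # second-largest and largest I(z_j; v_k)
        gaps.append((top2[1] - top2[0]) / h)
    return float(np.mean(gaps))
```

A large gap for every factor means each ground-truth factor is captured predominantly by a single latent dimension, which is why MIG requires no auxiliary classifier.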

Citations

Contrastively Disentangled Sequential Variational Autoencoder
TLDR
This work proposes a novel sequence representation learning method, named Contrastively Disentangled Sequential Variational Autoencoder (C-DSVAE), to extract and separate the static and dynamic factors in the latent space, using a novel evidence lower bound.
Learning Disentangled Representations with Attentive Joint Variational Autoencoder
TLDR
This paper proposes a novel Attentive Joint Variational Autoencoder (AJVAE), which generates intermediate continuous latent variables in the encoding process and explicitly explores the underlying intrinsic varieties and diversities that are implicitly contained in the training samples, fusing them.
Semi-supervised Disentanglement with Independent Vector Variational Autoencoders
TLDR
Experiments conducted on several image datasets demonstrate that the disentanglement achieved via the variational autoencoder method can improve classification performance and generation controllability.
Disentangled Representation Learning with Wasserstein Total Correlation
TLDR
The proposed approach is shown to achieve comparable disentanglement with a smaller sacrifice in reconstruction ability, and a new metric to measure disentangled latent representations is introduced.
Disentangling Autoencoders (DAE)
TLDR
A novel, non-probabilistic disentangling framework for autoencoders, based on the principles of symmetry transformations in group theory, is proposed; it achieves better disentanglement when the variances of the individual features differ.
On Disentanglement and Mutual Information in Semi-Supervised Variational Auto-Encoders
TLDR
This work considers the semi-supervised setting, in which the factors of variation are labelled for a small fraction of the samples, and examines how the quality of learned representations is affected by the dimension of the unsupervised component of the latent space.
Non-Syn Variational Autoencoders (2018)
TLDR
This paper addresses the task of disentanglement and introduces a new state-of-the-art approach called the Non-synergistic Variational Autoencoder (Non-Syn VAE), built on the notion of synergy, which arises when describing the information that neurons encode in their responses to stimuli.
GCVAE: Generalized-Controllable Variational AutoEncoder
TLDR
This work presents a generalized framework to handle the trade-off between attaining extremely low reconstruction error and achieving a high disentanglement score, and proves that maximizing information in the reconstruction network is equivalent to information maximization during amortized inference, under reasonable assumptions and constraint relaxation.
PRI-VAE: Principle-of-Relevant-Information Variational Autoencoders
TLDR
This work first proposes a novel learning objective, termed the principle-of-relevant-information variational autoencoder (PRI-VAE), to learn disentangled representations, and presents an information-theoretic perspective to analyze existing VAE models by inspecting the evolution of some critical information-theoretic quantities across training epochs.
Weakly Supervised Disentanglement by Pairwise Similarities
TLDR
Experimental results demonstrate that utilizing weak supervision improves the performance of the disentanglement method substantially.
...

References

Showing 1-10 of 55 references
Variational Inference of Disentangled Latent Concepts from Unlabeled Observations
TLDR
This work considers the problem of unsupervised learning of disentangled representations from a large pool of unlabeled observations and proposes a variational inference based approach to infer disentangled latent factors.
The Variational Fair Autoencoder
TLDR
This model is based on a variational autoencoding architecture with priors that encourage independence between sensitive and latent factors of variation, plus an additional penalty term based on the "Maximum Mean Discrepancy" (MMD) measure.
Disentangling Factors of Variation via Generative Entangling
TLDR
This work proposes a novel model family based on the spike-and-slab restricted Boltzmann machine, which it generalizes to include higher-order interactions among multiple latent variables, and applies it to the task of facial expression classification.
Auto-Encoding Total Correlation Explanation
TLDR
An information-theoretic approach to characterizing disentanglement and dependence in representation learning using multivariate mutual information, also called total correlation, is proposed; the resulting lower bound is found to be equivalent to the one in variational autoencoders (VAEs) under certain conditions.
InfoVAE: Information Maximizing Variational Autoencoders
TLDR
It is shown that this model can significantly improve the quality of the variational posterior and can make effective use of the latent features regardless of the flexibility of the decoding distribution, and it is demonstrated that the models outperform competing approaches on multiple performance metrics.
beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do.
Disentangling by Factorising
TLDR
FactorVAE, a method that disentangles by encouraging the distribution of representations to be factorial and hence independent across the dimensions, is proposed; it improves upon $\beta$-VAE by providing a better trade-off between disentanglement and reconstruction quality.
An Information-Theoretic Analysis of Deep Latent-Variable Models
TLDR
An information-theoretic framework for understanding trade-offs in the unsupervised learning of deep latent-variable models using variational inference is presented, showing how this framework sheds light on many recently proposed extensions to the variational autoencoder family.
A Framework for the Quantitative Evaluation of Disentangled Representations
TLDR
A framework for the quantitative evaluation of disentangled representations when the ground-truth latent structure is available is proposed, and three criteria are explicitly defined and quantified to elucidate the quality of learnt representations and thus compare models on an equal basis.
On the Emergence of Invariance and Disentangling in Deep Representations
TLDR
It is shown that invariance in a deep neural network is equivalent to minimality of the representation it computes, and can be achieved by stacking layers and injecting noise in the computation, under realistic and empirically validated assumptions.
...