DOT-VAE: Disentangling One Factor at a Time

@article{Patil2022DOTVAEDO,
title={DOT-VAE: Disentangling One Factor at a Time},
author={Vaishnavi Patil and Matthew Evanusa and Joseph J{\'a}J{\'a}},
journal={ArXiv},
year={2022},
volume={abs/2210.10920}
}
• Published 19 October 2022
As we enter the era of machine learning characterized by an overabundance of data, the discovery, organization, and interpretation of data in an unsupervised manner becomes a critical need. One promising approach to this endeavour is the problem of Disentanglement, which aims at learning the underlying generative latent factors of the data, called the factors of variation, and encoding them in disjoint latent representations. Recent advances have made efforts to solve this problem for…

References

Showing 1–10 of 33 references

• ICLR 2017: "β-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework" (Higgins et al.)
Learning an interpretable, factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence…
• ICML 2020: "InfoGAN-CR and ModelCentrality: Self-Supervised Model Training and Selection for Disentangling GANs" (Lin et al.)
This work designs a novel approach for training disentangled GANs with self-supervision: a contrastive regularizer inspired by a natural notion of disentanglement, latent traversal, and an unsupervised model-selection scheme called ModelCentrality, which uses generated synthetic samples to compute the medoid of a collection of models.
• ICML 2019: "Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations" (Locatello et al.)
This paper theoretically shows that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases on both the models and the data, and trains more than 12,000 models covering the most prominent methods and evaluation metrics on seven different data sets.
The proposed variation predictability is a general constraint applicable to both the VAE and GAN frameworks for boosting disentangled latent representations; it correlates well with existing ground-truth-required metrics, and the proposed algorithm is effective for disentanglement learning.
• NeurIPS 2018: "Learning Deep Disentangled Embeddings With the F-Statistic Loss" (Ridgeway & Mozer)
A new paradigm for discovering disentangled representations of class structure is proposed, along with a novel loss function based on the $F$ statistic, which describes the separation of two or more distributions.
• ICML 2019
A simple procedure is shown for minimizing the total correlation of the continuous latent variables without having to use a discriminator network or perform importance sampling, via cascading the information flow in the $\beta$-VAE framework.
• arXiv 2018: "Understanding disentangling in β-VAE" (Burgess et al.)
A modification to the training regime of β-VAE is proposed that progressively increases the information capacity of the latent code during training, facilitating the robust learning of disentangled representations in β-VAE without the previous trade-off in reconstruction accuracy.
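The capacity-annealing schedule described above can be sketched as a loss term that penalizes the KL divergence's distance from a linearly growing target capacity. The function name and hyperparameter values below are illustrative, not taken from the paper:

```python
def capacity_annealed_loss(recon_loss, kl, step, c_max=25.0,
                           anneal_steps=100_000, gamma=1000.0):
    """Sketch of a capacity-annealed beta-VAE objective.

    The KL term is driven toward a target capacity C that grows linearly
    from 0 to c_max over anneal_steps, so the latent code is allowed to
    carry more information as training progresses, instead of being
    uniformly penalized from the start.
    """
    c = min(c_max, c_max * step / anneal_steps)  # current capacity target C
    return recon_loss + gamma * abs(kl - c)     # gamma keeps KL close to C
```

Early in training the target capacity is near zero, so any KL cost is penalized; late in training, a KL equal to `c_max` incurs no penalty, which is what removes the usual reconstruction trade-off.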
• NeurIPS 2018: "Isolating Sources of Disentanglement in Variational Autoencoders" (Chen et al.)
We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables, and use this to motivate our $\beta$-TCVAE (Total Correlation Variational Autoencoder).
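The decomposition referred to here is the standard one: averaged over data points $n$, the KL term of the ELBO splits into index-code mutual information, total correlation, and dimension-wise KL (with $z_j$ the individual latent dimensions):

```latex
\mathbb{E}_{p(n)}\!\left[\mathrm{KL}\big(q(z \mid n)\,\|\,p(z)\big)\right]
  = \underbrace{I_q(z; n)}_{\text{index-code MI}}
  + \underbrace{\mathrm{KL}\Big(q(z)\,\Big\|\,\prod_j q(z_j)\Big)}_{\text{total correlation}}
  + \underbrace{\sum_j \mathrm{KL}\big(q(z_j)\,\|\,p(z_j)\big)}_{\text{dimension-wise KL}}
```

Penalizing only the middle term, rather than the whole KL, is what lets $\beta$-TCVAE encourage independence across latent dimensions without over-penalizing the per-dimension priors.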
• AAAI 2020
It is shown that GANs have a natural advantage in disentangling with an alternating latent-variable (noise) sampling method that is straightforward and robust, and a new perspective on designing the structure of the generator and discriminator is provided, demonstrating that a minor structural change and an orthogonal regularization on model weights entail improved disentanglement.
• ICML 2018: "Disentangling by Factorising" (Kim & Mnih)
FactorVAE, a method that disentangles by encouraging the distribution of representations to be factorial and hence independent across dimensions, is proposed; it improves upon $\beta$-VAE by providing a better trade-off between disentanglement and reconstruction quality.
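The factorial-distribution penalty in this line of work is typically estimated with the density-ratio trick: a discriminator is trained to tell samples of the aggregate posterior $q(z)$ from samples of the product of its marginals, and its output yields a total-correlation estimate. A minimal sketch, where `tc_from_probs` and its input are illustrative names rather than the paper's code:

```python
import math

def tc_from_probs(d_probs):
    """Density-ratio estimate of total correlation (sketch).

    d_probs: for each sample z ~ q(z), the discriminator's probability
    that z came from q(z) rather than from the product of its marginals.
    TC(z) is then approximated by E[log D(z) - log(1 - D(z))].
    """
    return sum(math.log(d) - math.log(1.0 - d) for d in d_probs) / len(d_probs)
```

A perfectly confused discriminator (D = 0.5 everywhere) yields an estimated total correlation of 0, i.e. the aggregate posterior already factorises across dimensions.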