Corpus ID: 237532511

# DisUnknown: Distilling Unknown Factors for Disentanglement Learning

@article{Xiang2021DisUnknownDU,
  title={DisUnknown: Distilling Unknown Factors for Disentanglement Learning},
  author={Sitao Xiang and Yuming Gu and Pengda Xiang and Menglei Chai and Hao Li and Yajie Zhao and Mingming He},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.08090}
}
• Published 16 September 2021
• Computer Science
• ArXiv
Disentangling data into interpretable and independent factors is critical for controllable generation tasks. With the availability of labeled data, supervision can help enforce the separation of specific factors as expected. However, it is often expensive or even impossible to label every single factor to achieve fully-supervised disentanglement. In this paper, we adopt a general setting where all factors that are hard to label or identify are encapsulated as a single unknown factor. Under this…
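
The setting the abstract describes — per-factor codes for the labeled factors plus one catch-all code for the unknown factor — is commonly formalized with a reconstruction term, classification terms that keep each label in its own code, and adversarial terms that purge label information from the unknown code. A generic sketch of such an objective (the classifiers $C_k$, adversaries $D_k$, and weight $\lambda$ are illustrative, not necessarily the paper's exact losses):

```latex
% z = (z_1, \dots, z_K, z_u): one code per labeled factor plus an unknown code z_u
% C_k: classifier for factor k; D_k: adversary probing z_u for factor k
\mathcal{L} = \underbrace{\lVert x - G(z_1,\dots,z_K,z_u) \rVert^2}_{\text{reconstruction}}
  + \sum_{k=1}^{K} \underbrace{\mathrm{CE}\big(C_k(z_k),\, y_k\big)}_{\text{keep } y_k \text{ in } z_k}
  - \lambda \sum_{k=1}^{K} \underbrace{\mathrm{CE}\big(D_k(z_u),\, y_k\big)}_{\text{purge } y_k \text{ from } z_u}
```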

#### References

Showing 1–10 of 62 references.
Disentangling factors of variation in deep representation using adversarial training
• Computer Science, Mathematics
• NIPS
• 2016
A conditional generative model for learning to disentangle the hidden factors of variation within a set of labeled observations, and separate them into complementary codes that are capable of generalizing to unseen classes and intra-class variabilities.
Multi-Level Variational Autoencoder: Learning Disentangled Representations from Grouped Observations
• Computer Science, Mathematics
• AAAI
• 2018
The Multi-Level Variational Autoencoder (ML-VAE), a new deep probabilistic model for learning a disentangled representation of a set of grouped observations, separates the latent representation into semantically meaningful parts by working both at the group level and the observation level, while retaining efficient test-time inference.
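
The group-level accumulation of evidence in this family of models can be sketched as a product of the per-observation Gaussian posteriors within a group, which reduces to precision-weighted averaging (a sketch of the general mechanism, not necessarily ML-VAE's exact formulation):

```latex
q(z_G \mid X_G) \propto \prod_{i \in G} \mathcal{N}\big(z_G;\, \mu_i,\, \sigma_i^2\big),
\qquad
\sigma_G^2 = \Big(\sum_{i \in G} \sigma_i^{-2}\Big)^{-1},
\quad
\mu_G = \sigma_G^2 \sum_{i \in G} \frac{\mu_i}{\sigma_i^2}
```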
Semi-supervised Disentanglement with Independent Vector Variational Autoencoders
• Computer Science, Mathematics
• ArXiv
• 2020
Experiments conducted on several image datasets demonstrate that the disentanglement achieved via the variational autoencoder method can improve classification performance and generation controllability.
Weakly Supervised Disentanglement by Pairwise Similarities
• Computer Science, Mathematics
• AAAI
• 2020
Experimental results demonstrate that utilizing weak supervision improves the performance of the disentanglement method substantially.
Demystifying Inter-Class Disentanglement
• Computer Science, Mathematics
• ICLR
• 2020
LORD, a novel method based on Latent Optimization for Representation Disentanglement, finds that latent optimization, along with an asymmetric noise regularization, is superior to amortized inference for achieving disentangled representations.
Dual Swap Disentangling
• Computer Science
• NeurIPS
• 2018
This paper proposes a weakly semi-supervised method, termed Dual Swap Disentangling (DSD), for disentangling using both labeled and unlabeled data; it imposes dimension-wise modularity and portability on the encodings of the unlabeled samples, which implicitly encourages disentanglement under the guidance of labeled pairs.
Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations
• Computer Science, Mathematics
• ICML
• 2019
This paper theoretically shows that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases on both the models and the data, and trains more than 12000 models covering most prominent methods and evaluation metrics on seven different data sets.
beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial…
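
The β-VAE objective referenced here weights the KL term of the standard VAE evidence lower bound by a factor $\beta > 1$, pressuring the posterior toward the factorized prior:

```latex
\max_{\phi,\theta}\;
\mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big]
\;-\; \beta\, \mathrm{KL}\big(q_\phi(z \mid x)\,\Vert\, p(z)\big)
```

Setting $\beta = 1$ recovers the ordinary VAE; larger $\beta$ trades reconstruction quality for more disentangled codes.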
Isolating Sources of Disentanglement in Variational Autoencoders
• Computer Science, Mathematics
• NeurIPS
• 2018
We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables. We use this to motivate our $\beta$-TCVAE (Total Correlation Variational Autoencoder)…
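
The decomposition this entry refers to splits the aggregate KL term of the ELBO (with $n$ the datapoint index) into three parts, of which the middle one — the total correlation — is the term $\beta$-TCVAE penalizes:

```latex
\mathbb{E}_{p(n)}\big[\mathrm{KL}\big(q(z \mid n)\,\Vert\, p(z)\big)\big]
= \underbrace{I_q(z; n)}_{\text{index-code MI}}
+ \underbrace{\mathrm{KL}\Big(q(z)\,\Big\Vert\, \textstyle\prod_j q(z_j)\Big)}_{\text{total correlation}}
+ \underbrace{\textstyle\sum_j \mathrm{KL}\big(q(z_j)\,\Vert\, p(z_j)\big)}_{\text{dimension-wise KL}}
```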
Learning Disentangled Representations with Semi-Supervised Deep Generative Models
This work proposes to learn disentangled representations that encode distinct aspects of the data into separate variables using model architectures that generalise from standard VAEs, employing a general graphical model structure in the encoder and decoder.