• Corpus ID: 231740460

# Semi-Supervised Disentanglement of Class-Related and Class-Independent Factors in VAE

@article{Hajimiri2021SemiSupervisedDO,
title={Semi-Supervised Disentanglement of Class-Related and Class-Independent Factors in VAE},
author={Sina Hajimiri and Aryo Lotfi and Mahdieh Soleymani Baghshah},
journal={ArXiv},
year={2021},
volume={abs/2102.00892}
}
• Published 1 February 2021 • Computer Science • ArXiv
In recent years, extending the variational autoencoder (VAE) framework to learn disentangled representations has received much attention. We address this problem by proposing a framework capable of disentangling class-related and class-independent factors of variation in data. Our framework employs an attention mechanism in its latent space to improve the extraction of class-related factors from data. We also handle the multimodality of the data distribution by utilizing mixture models…

## Citations

### EXoN: EXplainable encoder Network

• Computer Science • ArXiv • 2021
A new semi-supervised learning method for the Variational AutoEncoder (VAE) that yields a customized and explainable latent space via the EXplainable encoder Network (EXoN), reducing the cost of investigating representation patterns in the latent space.

### Class-Disentanglement and Applications in Adversarial Detection and Defense

• Computer Science • NeurIPS • 2021
"Class-disentanglement" is proposed: a variational autoencoder G(·) is trained to extract class-dependent information as x − G(x) via a trade-off between reconstructing x by G(x) and classifying x by D(x − G(x)), where the former competes with the latter in decomposing x so that the residual retains only the information necessary for classification.
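The trade-off described above can be sketched as a combined objective. This is a minimal illustration with stand-in arrays rather than the paper's actual networks; the function names, the squared-error reconstruction term, and the weight `lam` are our own assumptions.

```python
import numpy as np

def reconstruction_loss(x, g_x):
    # Mean squared error between the input x and the VAE reconstruction G(x)
    return float(np.mean((x - g_x) ** 2))

def cross_entropy(logits, label):
    # Softmax cross-entropy on the classifier D's output for the residual x - G(x)
    shifted = logits - logits.max()          # numerically stable softmax
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return float(-log_probs[label])

def class_disentanglement_loss(x, g_x, d_logits, label, lam=1.0):
    # Reconstruct x with G while D classifies the residual x - G(x);
    # lam balances the two competing terms of the trade-off.
    return reconstruction_loss(x, g_x) + lam * cross_entropy(d_logits, label)

# Toy usage with fabricated numbers (no real G or D involved)
x = np.array([1.0, 0.0, 1.0])
g_x = np.array([0.9, 0.1, 0.8])       # pretend VAE reconstruction G(x)
d_logits = np.array([2.0, 0.5])        # pretend classifier logits for x - G(x)
loss = class_disentanglement_loss(x, g_x, d_logits, label=0, lam=0.5)
```

In the actual method both terms would be backpropagated jointly, so improving classification of the residual pressures G to absorb the class-independent content.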

### Harmony: A Generic Unsupervised Approach for Disentangling Semantic Content from Parameterized Transformations

• Computer Science • 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Harmony achieves significantly improved disentanglement over the baseline models on several image datasets from diverse domains; it is generalizable to many other imaging domains and can potentially be extended to domains beyond imaging as well.

## References

Showing 1–10 of 35 references

### Semi-supervised Disentanglement with Independent Vector Variational Autoencoders

• Computer Science • ArXiv • 2020
Experiments conducted on several image datasets demonstrate that the disentanglement achieved via the variational autoencoder method can improve classification performance and generation controllability.

### beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework

• Computer Science • ICLR • 2017
Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence.

### Learning Disentangled Representations with Semi-Supervised Deep Generative Models

• Computer Science • NIPS • 2017
This work proposes to learn disentangled representations that encode distinct aspects of the data into separate variables using model architectures that generalise from standard VAEs, employing a general graphical model structure in the encoder and decoder.

### Disentangling and Learning Robust Representations with Natural Clustering

• Computer Science • 2019 18th IEEE International Conference on Machine Learning and Applications (ICMLA)
This work proposes N-VAE, a model which is capable of separating factors of variation which are exclusive to certain classes from factors that are shared among classes, and implements an explicitly compositional latent variable structure.

### Isolating Sources of Disentanglement in Variational Autoencoders

• Computer Science • NeurIPS • 2018
We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables. We use this to motivate our $\beta$-TCVAE (Total Correlation Variational Autoencoder), a refinement of the $\beta$-VAE objective.
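For context, the ELBO decomposition this entry refers to is the standard one; our rendering, not quoted from the paper:

```latex
\mathbb{E}_{p(x)}\left[ \mathrm{KL}\big( q(z \mid x) \,\|\, p(z) \big) \right]
  = \underbrace{I_q(x; z)}_{\text{index-code MI}}
  + \underbrace{\mathrm{KL}\!\left( q(z) \,\Big\|\, \prod_j q(z_j) \right)}_{\text{total correlation}}
  + \underbrace{\sum_j \mathrm{KL}\big( q(z_j) \,\|\, p(z_j) \big)}_{\text{dimension-wise KL}}
```

$\beta$-TCVAE keeps the ELBO but up-weights only the total-correlation term by a factor $\beta$, which is what encourages statistically independent latent dimensions.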

### Semi-Supervised StyleGAN for Disentanglement Learning

• Computer Science • ICML • 2020
The impact of limited supervision is investigated, new metrics to quantify generator controllability are proposed, and evidence is presented that a crucial trade-off may exist between disentangled representation learning and controllable generation.

### Structured Disentangled Representations

• Computer Science • AISTATS • 2019
Experiments on a variety of datasets demonstrate that the proposed two-level hierarchical objective can not only disentangle discrete variables, but that doing so also improves disentanglement of other variables and, importantly, generalization even to unseen combinations of factors.

### Guided Variational Autoencoder for Disentanglement Learning

• Computer Science • 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
An algorithm, Guided Variational Autoencoder (Guided-VAE), that learns a controllable generative model by performing latent-representation disentanglement learning, providing a guidance signal to the latent encoding/embedding in the VAE without changing its main backbone architecture.

### Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations

• Computer Science • ICML • 2019
This paper theoretically shows that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases on both the models and the data, and trains more than 12000 models covering most prominent methods and evaluation metrics on seven different data sets.

### Auto-Encoding Total Correlation Explanation

• Computer Science • AISTATS • 2019
An information-theoretic approach to characterizing disentanglement and dependence in representation learning using multivariate mutual information, also called total correlation, is proposed and it is found that this lower bound is equivalent to the one in variational autoencoders (VAE) under certain conditions.
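The total correlation this entry builds on is easy to illustrate on a discrete joint distribution: it is the sum of marginal entropies minus the joint entropy, and it vanishes exactly when the variables are independent. A minimal sketch (function names and the toy distributions are our own, not from the paper):

```python
import numpy as np

def entropy(p):
    # Shannon entropy in bits of a probability vector (zero entries skipped)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def total_correlation(joint):
    # TC(z1, z2) = H(z1) + H(z2) - H(z1, z2) for a 2D joint distribution
    marg_z1 = joint.sum(axis=1)   # marginal over z2
    marg_z2 = joint.sum(axis=0)   # marginal over z1
    return entropy(marg_z1) + entropy(marg_z2) - entropy(joint.ravel())

# Perfectly correlated fair bits: TC should be 1 bit
correlated = np.array([[0.5, 0.0],
                       [0.0, 0.5]])

# Independent fair bits: TC should be 0
independent = np.full((2, 2), 0.25)
```

Disentanglement methods in this line of work penalize exactly this quantity over the aggregate latent posterior, pushing the learned latent dimensions toward independence.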