Corpus ID: 231740460

Semi-Supervised Disentanglement of Class-Related and Class-Independent Factors in VAE

@article{Hajimiri2021SemiSupervisedDO,
  title={Semi-Supervised Disentanglement of Class-Related and Class-Independent Factors in VAE},
  author={Sina Hajimiri and Aryo Lotfi and Mahdieh Soleymani Baghshah},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.00892}
}
In recent years, extending the variational autoencoder framework to learn disentangled representations has received much attention. We address this problem by proposing a framework capable of disentangling class-related and class-independent factors of variation in data. Our framework employs an attention mechanism in its latent space in order to improve the process of extracting class-related factors from data. We also deal with the multimodality of the data distribution by utilizing mixture models… 
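The split described in the abstract (a latent space partitioned into class-related and class-independent blocks, with attention applied to the class-related block) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dimensions, the linear "encoder" weights, and the function names are all hypothetical stand-ins for a trained neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not from the paper): a 16-d input,
# 8 class-related latent units, 4 class-independent ("style") units.
D_IN, D_CLASS, D_STYLE = 16, 8, 4

# Toy linear weights standing in for a neural encoder.
W_mu = rng.normal(size=(D_IN, D_CLASS + D_STYLE))
W_logvar = rng.normal(size=(D_IN, D_CLASS + D_STYLE)) * 0.1

def encode(x):
    """Map x to mean / log-variance of a split latent [z_class | z_style]."""
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar):
    """Standard VAE reparameterization trick: z = mu + sigma * eps."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def attend_class_factors(z):
    """Softmax attention over the class-related block, letting the model
    emphasize the latent units most informative about the label."""
    z_class, z_style = z[:D_CLASS], z[D_CLASS:]
    scores = np.exp(z_class - z_class.max())  # numerically stable softmax
    attn = scores / scores.sum()
    return attn * z_class, z_style

x = rng.normal(size=D_IN)
mu, logvar = encode(x)
z = reparameterize(mu, logvar)
z_class, z_style = attend_class_factors(z)
print(z_class.shape, z_style.shape)  # (8,) (4,)
```

In a real model the two blocks would feed separate loss terms (a supervised loss on `z_class`, the usual ELBO on both), which is what drives the disentanglement.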

EXoN: EXplainable encoder Network

A new semi-supervised learning method for the Variational AutoEncoder (VAE) which yields a customized and explainable latent space via an EXplainable encoder Network (EXoN) and reduces the cost of investigating representation patterns in the latent space.

Class-Disentanglement and Applications in Adversarial Detection and Defense

A method called “class-disentanglement” is proposed that trains a variational autoencoder G(·) to extract class-dependent information as x − G(x), via a trade-off between reconstructing x with G(x) and classifying x with D(x − G(x)): the former competes with the latter in decomposing x, so the residual retains only the information necessary for classification.
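The trade-off in the snippet above amounts to a two-term objective: a reconstruction loss on G(x) plus a classification loss on the residual x − G(x). A minimal sketch of that combined loss, with toy stand-ins for G and D (the scaling in G, the linear classifier D, and the weight `lam` are all illustrative assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(1)

def G(x):
    # Stand-in for the VAE's reconstruction; a real model would be
    # a trained autoencoder, not a fixed scaling.
    return 0.9 * x

def D(r, w):
    # Stand-in linear classifier over the residual r = x - G(x),
    # with a numerically stable softmax.
    logits = r @ w
    e = np.exp(logits - logits.max())
    return e / e.sum()

def class_disentangle_loss(x, y_onehot, w, lam=1.0):
    """Trade-off objective: reconstruct x with G(x) while classifying
    the residual x - G(x) with D; lam balances the two terms."""
    recon = np.mean((x - G(x)) ** 2)            # reconstruction term
    probs = D(x - G(x), w)
    ce = -np.sum(y_onehot * np.log(probs + 1e-12))  # cross-entropy term
    return recon + lam * ce

x = rng.normal(size=16)
w = rng.normal(size=(16, 3))
y = np.eye(3)[0]
loss = class_disentangle_loss(x, y, w)
print(loss > 0)  # True
```

Minimizing the reconstruction term pushes G(x) toward x (shrinking the residual), while the classification term needs the residual to stay informative, which is exactly the competition the abstract describes.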

Harmony: A Generic Unsupervised Approach for Disentangling Semantic Content from Parameterized Transformations

Harmony achieves significantly improved disentanglement over the baseline models on several image datasets of diverse domains and is generalizable to many other imaging domains and can potentially be extended to domains beyond imaging as well.

References

SHOWING 1-10 OF 35 REFERENCES

Semi-supervised Disentanglement with Independent Vector Variational Autoencoders

Experiments conducted on several image datasets demonstrate that the disentanglement achieved via the variational autoencoder method can improve classification performance and generation controllability.

beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework

Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do.

Learning Disentangled Representations with Semi-Supervised Deep Generative Models

This work proposes to learn disentangled representations that encode distinct aspects of the data into separate variables using model architectures that generalise from standard VAEs, employing a general graphical model structure in the encoder and decoder.

Disentangling and Learning Robust Representations with Natural Clustering

  • Javier Antorán, A. Miguel
  • Computer Science
    2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA)
  • 2019
This work proposes N-VAE, a model which is capable of separating factors of variation which are exclusive to certain classes from factors that are shared among classes, and implements an explicitly compositional latent variable structure.

Isolating Sources of Disentanglement in Variational Autoencoders

We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables. We use this to motivate the $\beta$-TCVAE (Total Correlation Variational Autoencoder), a refinement of and plug-in replacement for $\beta$-VAE that requires no additional hyperparameters during training.
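The decomposition referenced above is the standard one: in expectation over the data, the KL term of the ELBO splits into index-code mutual information, total correlation, and a dimension-wise KL (notation is the conventional one, with q(z) the aggregate posterior; this is a summary of the known result, not a derivation from this snippet):

```latex
\mathbb{E}_{p(x)}\big[\mathrm{KL}\big(q(z \mid x) \,\|\, p(z)\big)\big]
  = I_q(z; x)
  + \mathrm{KL}\Big(q(z) \,\Big\|\, \textstyle\prod_j q(z_j)\Big)
  + \sum_j \mathrm{KL}\big(q(z_j) \,\|\, p(z_j)\big)
```

$\beta$-TCVAE penalizes only the middle term, the total correlation, with the weight $\beta$.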

Semi-Supervised StyleGAN for Disentanglement Learning

The impact of limited supervision is investigated, new metrics to quantify generator controllability are proposed, and there may exist a crucial trade-off between disentangled representation learning and controllable generation.

Structured Disentangled Representations

Experiments on a variety of datasets demonstrate that the proposed two-level hierarchical objective can not only disentangle discrete variables, but that doing so also improves disentanglement of other variables and, importantly, generalization even to unseen combinations of factors.

Guided Variational Autoencoder for Disentanglement Learning

An algorithm, guided variational autoencoder (Guided-VAE), that is able to learn a controllable generative model by performing latent representation disentanglement learning by providing signal to the latent encoding/embedding in VAE without changing its main backbone architecture.

Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations

This paper theoretically shows that the unsupervised learning of disentangled representations is fundamentally impossible without inductive biases on both the models and the data, and trains more than 12000 models covering most prominent methods and evaluation metrics on seven different data sets.

Auto-Encoding Total Correlation Explanation

An information-theoretic approach to characterizing disentanglement and dependence in representation learning using multivariate mutual information, also called total correlation, is proposed and it is found that this lower bound is equivalent to the one in variational autoencoders (VAE) under certain conditions.
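For reference, total correlation is the multivariate generalization of mutual information mentioned in this snippet; for latent variables $z = (z_1, \dots, z_d)$ it can be written as:

```latex
\mathrm{TC}(z)
  = \mathrm{KL}\Big(q(z) \,\Big\|\, \textstyle\prod_{j=1}^{d} q(z_j)\Big)
  = \sum_{j=1}^{d} H(z_j) - H(z)
```

It is zero exactly when the latent dimensions are independent, which is why it serves as a natural measure of (lack of) disentanglement.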