# IntroVAC: Introspective Variational Classifiers for Learning Interpretable Latent Subspaces

@article{Maggipinto2022IntroVACIV,
title={IntroVAC: Introspective Variational Classifiers for Learning Interpretable Latent Subspaces},
author={Marco Maggipinto and M. Terzi and Gian Antonio Susto},
journal={Eng. Appl. Artif. Intell.},
year={2022},
volume={109},
pages={104658}
}
• Published 3 August 2020
• Computer Science
• Eng. Appl. Artif. Intell.
1 Citation

## Citations

### CAT: Controllable Attribute Translation for Fair Facial Attribute Classification

• Computer Science
• 2022
This work proposes an effective pipeline to generate high-quality facial images with desired facial attributes and to supplement the original dataset into a balanced dataset at both levels, which theoretically satisfies several fairness criteria.

## References

SHOWING 1-10 OF 39 REFERENCES

### Learning Latent Subspaces in Variational Autoencoders

• Computer Science
NeurIPS
• 2018
A VAE-based generative model is proposed that extracts features correlated with binary labels in the data and structures them in an easily interpreted latent subspace; the utility of the learned representations is demonstrated on attribute-manipulation tasks on both the Toronto Face and CelebA datasets.
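The latent-subspace idea can be sketched with a toy example (synthetic data and a linear probe of my own construction, not the paper's model): when the label-relevant information is confined to a designated slice of the latent code, a simple linear classifier on that slice recovers the label.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration: a latent code split into an attribute subspace w and a
# residual subspace z; a linear probe on w predicts the binary label while
# z is left unconstrained.
latent = rng.normal(size=(100, 8))
w, z = latent[:, :2], latent[:, 2:]      # attribute dims vs. residual dims
labels = (w[:, 0] > 0).astype(int)       # label correlated with w only

# A least-squares probe trained on w recovers the label direction.
coef, *_ = np.linalg.lstsq(w, labels - labels.mean(), rcond=None)
pred = (w @ coef > 0).astype(int)
accuracy = (pred == labels).mean()
```

Because the label depends only on the attribute subspace, the probe's accuracy is near-perfect; the same probe trained on `z` would be at chance.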

### VEEGAN: Reducing Mode Collapse in GANs using Implicit Variational Learning

• Computer Science
NIPS
• 2017
VEEGAN is introduced, featuring a reconstructor network that reverses the action of the generator by mapping from data back to noise; it resists mode collapse to a far greater extent than other recent GAN variants and produces more realistic samples.
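The reconstructor idea can be illustrated with linear stand-ins (a toy construction, not VEEGAN's networks or its full objective): if the reconstructor F inverts the generator G on its range, the reconstruction penalty measured in *noise space* vanishes.

```python
import numpy as np

rng = np.random.default_rng(1)

# G maps noise to data; F maps data back to noise. VEEGAN-style training
# penalizes ||z - F(G(z))||^2 in noise space, which discourages G from
# collapsing many z values onto the same output.
G = rng.normal(size=(4, 2))   # generator weights: z (2-d) -> x (4-d)
F = np.linalg.pinv(G)         # an ideal reconstructor inverts G on its range

z = rng.normal(size=(256, 2))
x = z @ G.T                   # generated samples
z_rec = x @ F.T               # mapped back to noise space
noise_recon_loss = np.mean(np.sum((z - z_rec) ** 2, axis=1))
```

With a collapsed generator (rank-deficient `G`), distinct `z` values map to the same `x`, so no `F` can drive this loss to zero — which is the signal the method exploits.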

### Guided Variational Autoencoder for Disentanglement Learning

• Computer Science
2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
• 2020
An algorithm, guided variational autoencoder (Guided-VAE), that learns a controllable generative model through latent-representation disentanglement, by providing supervisory signal to the latent encoding/embedding of a VAE without changing its main backbone architecture.

### Adversarial Feature Learning

• Computer Science
ICLR
• 2017
Bidirectional Generative Adversarial Networks are proposed as a means of learning the inverse mapping of GANs, and it is demonstrated that the resulting learned feature representation is useful for auxiliary supervised discrimination tasks, competitive with contemporary approaches to unsupervised and self-supervised feature learning.
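The bidirectional objective described above is the standard BiGAN minimax (restated here for reference, not taken verbatim from this page): the discriminator judges joint pairs of data and latent code, so at the optimum the encoder E inverts the generator G.

```latex
\min_{G,E}\,\max_{D}\;
  \mathbb{E}_{x \sim p_x}\!\left[\log D\big(x, E(x)\big)\right]
  + \mathbb{E}_{z \sim p_z}\!\left[\log\big(1 - D\big(G(z), z\big)\big)\right]
```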

### IntroVAE: Introspective Variational Autoencoders for Photographic Image Synthesis

• Computer Science
NeurIPS
• 2018
A novel introspective variational autoencoder (IntroVAE) model for synthesizing high-resolution photographic images; it self-evaluates the quality of its generated samples, improves itself accordingly, and requires no extra discriminators.
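As a rough sketch of the introspective training (notation paraphrased from the IntroVAE paper; α, β, and the margin m are its hyperparameters, L_REG the KL to the prior, L_AE the reconstruction loss, with subscripts r and p denoting reconstructed and sampled images), the encoder and generator play an adversarial game over the KL term itself:

```latex
% Encoder: regularize real codes, but push the KL of generated images
% above a margin m (the encoder acts as the discriminator).
L_E = L_{REG}(x)
      + \alpha \sum_{s \in \{r,\,p\}} \big[\, m - L_{REG}(x_s) \,\big]^{+}
      + \beta\, L_{AE}(x, x_r)

% Generator: shrink those same KL terms, fooling the encoder.
L_G = \alpha \sum_{s \in \{r,\,p\}} L_{REG}(x_s) + \beta\, L_{AE}(x, x_r)
```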

### beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework

• Computer Science
ICLR
• 2017
Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial intelligence that is able to learn and reason in the same way that humans do.
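The β-VAE objective referenced here is the standard evidence lower bound with the KL term upweighted by a factor β > 1, which pressures the approximate posterior toward the factorised prior and thereby encourages disentangled latents:

```latex
\mathcal{L}(\theta, \phi; x) =
  \mathbb{E}_{q_\phi(z \mid x)}\!\left[\log p_\theta(x \mid z)\right]
  - \beta\, D_{\mathrm{KL}}\!\left(q_\phi(z \mid x) \,\big\|\, p(z)\right)
```

Setting β = 1 recovers the ordinary VAE; larger β trades reconstruction quality for disentanglement.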

### Adversarial Autoencoders

• Computer Science
ArXiv
• 2015
This paper shows how the adversarial autoencoder can be used in applications such as semi-supervised classification, disentangling style and content of images, unsupervised clustering, dimensionality reduction and data visualization, and performed experiments on MNIST, Street View House Numbers and Toronto Face datasets.

### Autoencoding beyond pixels using a learned similarity metric

• Computer Science
ICML
• 2016
An autoencoder that leverages learned representations to better measure similarities in data space is presented; the method learns an embedding in which high-level abstract visual features (e.g. wearing glasses) can be modified using simple arithmetic.
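The attribute arithmetic mentioned here can be sketched on synthetic codes (the "glasses" axis below is an invented ground truth, not a trained embedding): the mean latent difference between attribute-positive and attribute-negative examples yields a direction that adds the attribute when summed onto any code.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for a learned latent space: codes of images "with
# glasses" differ from their counterparts by a fixed attribute direction.
attr_direction = np.array([2.0, 0.0, 0.0])   # ground-truth "glasses" axis
z_without = rng.normal(size=(500, 3))
z_with = z_without + attr_direction          # codes of attribute-positive images

# Estimate the attribute vector from data, then apply it to a new code.
estimated = z_with.mean(axis=0) - z_without.mean(axis=0)
z_new = rng.normal(size=3)
z_edited = z_new + estimated                 # "put glasses on" z_new
```

Decoding `z_edited` with the trained decoder would then render the attribute onto the new sample.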

### Large Scale GAN Training for High Fidelity Natural Image Synthesis

• Computer Science
ICLR
• 2019
It is found that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the generator's input.
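The truncation trick itself can be stated independently of BigGAN's architecture: sample the latent from a normal distribution, but resample any entry whose magnitude exceeds a threshold. This shrinks the input's variance, trading sample variety for fidelity.

```python
import numpy as np

def truncated_normal(shape, threshold, rng):
    """Sample N(0, 1) values, resampling entries until all lie in
    [-threshold, threshold] (the truncation trick's input distribution)."""
    z = rng.normal(size=shape)
    out_of_range = np.abs(z) > threshold
    while out_of_range.any():
        z[out_of_range] = rng.normal(size=out_of_range.sum())
        out_of_range = np.abs(z) > threshold
    return z

rng = np.random.default_rng(3)
z = truncated_normal((10000,), threshold=0.5, rng=rng)
```

A smaller threshold yields latents closer to the mode of the prior, hence higher-fidelity but less diverse generator outputs.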

### Isolating Sources of Disentanglement in Variational Autoencoders

• Computer Science
NeurIPS
• 2018
We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables. We use this to motivate our $\beta$-TCVAE (Total Correlation Variational Autoencoder), a refinement of the $\beta$-VAE objective for learning disentangled representations.
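The decomposition referred to splits the aggregate KL term of the ELBO into index-code mutual information, total correlation, and dimension-wise KL; β-TCVAE penalizes only the middle (total correlation) term:

```latex
\mathbb{E}_{p(x)}\!\left[ D_{\mathrm{KL}}\!\left(q(z \mid x)\,\big\|\,p(z)\right) \right]
  = \underbrace{I_q(x; z)}_{\text{index-code MI}}
  + \underbrace{D_{\mathrm{KL}}\!\Big(q(z)\,\big\|\,\textstyle\prod_j q(z_j)\Big)}_{\text{total correlation}}
  + \underbrace{\textstyle\sum_j D_{\mathrm{KL}}\!\left(q(z_j)\,\big\|\,p(z_j)\right)}_{\text{dimension-wise KL}}
```

Weighting only the total correlation with β isolates the pressure toward statistically independent latent dimensions.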