Corpus ID: 246904884

Identifiable Deep Generative Models via Sparse Decoding

@inproceedings{Moran2021IdentifiableDG,
  title={Identifiable Deep Generative Models via Sparse Decoding},
  author={Gemma E. Moran and Dhanya Sridhar and Yixin Wang and David M. Blei},
  year={2021}
}
We develop the sparse VAE for unsupervised representation learning on high-dimensional data. The sparse VAE learns a set of latent factors (representations) that summarize the associations among the observed data features. The underlying model is sparse in that each observed feature (i.e., each dimension of the data) depends on a small subset of the latent factors. As examples, in ratings data each movie is described by only a few genres; in text data each word is applicable to only a few topics…
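The per-feature sparsity described in the abstract can be illustrated with a fixed binary mask over latent factors. This is only a minimal sketch: the sparse VAE actually learns the dependence structure, and every name here (`sparse_decode`, `W`, the one-factor-per-feature choice) is hypothetical rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

G, K = 6, 3                # observed features, latent factors
z = rng.normal(size=K)     # a single latent representation

# Row-sparse mask: each observed feature depends on a small subset of
# factors (fixed here for illustration; the paper learns this sparsity).
W = np.zeros((G, K))
for g in range(G):
    active = rng.choice(K, size=1, replace=False)  # one factor per feature
    W[g, active] = 1.0

def sparse_decode(z, W, weights):
    # Feature g sees only its masked factors: W[g] zeroes out the rest.
    return np.tanh((W * weights) @ z)

weights = rng.normal(size=(G, K))
x_mean = sparse_decode(z, W, weights)
print(x_mean.shape)          # (6,)
print((W != 0).sum(axis=1))  # each feature uses exactly one factor here
```

The mask makes the "each movie is only described by a few genres" intuition concrete: zeroed entries of a feature's row cut that feature off from the corresponding factors entirely.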

References

Showing 1–10 of 48 references

Variational Sparse Coding

TLDR: Proposes a model based on variational auto-encoders in which interpretability is induced through latent-space sparsity, using a mixture of spike-and-slab distributions as the prior; the model provides unique capabilities, such as recovering feature exploitation, synthesizing samples that share attributes with a given input object, and controlling both discrete and continuous features upon generation.
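The spike-and-slab prior mentioned in this summary mixes a point mass at zero (the spike) with a Gaussian (the slab), so most latent dimensions are exactly zero. A minimal sampling sketch, with illustrative parameter names not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def spike_and_slab(n, pi=0.2, slab_std=1.0):
    # With probability pi, draw from the Gaussian "slab";
    # otherwise take the "spike" at exactly zero.
    on = rng.random(n) < pi
    return on * rng.normal(scale=slab_std, size=n)

z = spike_and_slab(1000)
print(np.mean(z == 0))  # roughly 0.8 of the entries are exactly zero
```

The exact-zero entries are what make the resulting latent codes sparse, in contrast to a plain Gaussian prior, which is zero with probability zero.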

When Is Unsupervised Disentanglement Possible?

TLDR: The results suggest that in some realistic settings, unsupervised disentanglement is provably possible without any domain-specific assumptions.

oi-VAE: Output Interpretable VAEs for Nonlinear Group Factor Analysis

TLDR: Demonstrates that oi-VAE yields meaningful notions of interpretability in the analysis of motion capture and MEG data, and shows that in these settings the regularization inherent to oi-VAE can actually lead to improved generalization and learned generative processes.

Sparse-Coding Variational Auto-Encoders

TLDR: The sparse-coding variational auto-encoder (SVAE) augments the classic sparse coding model with a probabilistic recognition model parametrized by a deep neural network; fit to natural image data under different assumed prior distributions, it obtains higher test performance than previous fitting methods.

beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework

Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial…

Variational Autoencoders and Nonlinear ICA: A Unifying Framework

TLDR: Shows that for a broad family of deep latent-variable models, identification of the true joint distribution over observed and latent variables is actually possible up to very simple transformations, thus achieving a principled and powerful form of disentanglement.

Auto-Encoding Variational Bayes

TLDR: Introduces a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case.
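The algorithm summarized here (auto-encoding variational Bayes) rests on two ingredients: the reparameterization trick, which makes sampling differentiable, and a closed-form KL term for Gaussian posteriors. A minimal sketch of both, with illustrative function names:

```python
import numpy as np

rng = np.random.default_rng(2)

def reparameterize(mu, log_var):
    # z = mu + sigma * eps with eps ~ N(0, I): gradients flow through
    # mu and log_var while the randomness stays in eps.
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ).
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

mu = np.zeros(4)
log_var = np.zeros(4)
z = reparameterize(mu, log_var)
print(z.shape)                             # (4,)
print(kl_to_standard_normal(mu, log_var))  # 0.0 when q equals the prior
```

In a full VAE these pieces are combined with a reconstruction term to form the evidence lower bound that the encoder and decoder are trained to maximize.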

Self-Supervised Learning with Data Augmentations Provably Isolates Content from Style

TLDR: Introduces Causal3DIdent, a dataset of high-dimensional, visually complex images with rich causal dependencies, used to study the effect of data augmentations performed in practice; numerical simulations with dependent latent variables are consistent with the theory.

Learning identifiable and interpretable latent models of high-dimensional neural activity using pi-VAE

The ability to record activities from hundreds of neurons simultaneously in the brain has placed an increasing demand for developing appropriate statistical techniques to analyze such data. Recently,…

Representation Learning: A Review and New Perspectives

TLDR: Reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks.