Identifiability of deep generative models under mixture priors without auxiliary information

@article{Kivva2022IdentifiabilityOD,
  title={Identifiability of deep generative models under mixture priors without auxiliary information},
  author={Bohdan Kivva and Goutham Rajendran and Pradeep Ravikumar and Bryon Aragam},
  journal={ArXiv},
  year={2022},
  volume={abs/2206.10044}
}
We prove identifiability of a broad class of deep latent variable models that (a) have universal approximation capabilities and (b) are the decoders of variational autoencoders that are commonly used in practice. Unlike existing work, our analysis does not require weak supervision, auxiliary information, or conditioning in the latent space. The models we consider are tightly connected with autoencoder architectures used in practice that leverage mixture priors in the latent space and ReLU/leaky… 
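To make the model class concrete, here is a minimal sketch (not the authors' code) of the kind of decoder described above: a latent variable drawn from a Gaussian mixture prior, pushed through a leaky-ReLU MLP. The class name, layer widths, and default dimensions are illustrative assumptions, not values from the paper.

import torch
import torch.nn as nn

class MixturePriorDecoder(nn.Module):
    def __init__(self, n_components=10, latent_dim=16, data_dim=784):
        super().__init__()
        # Gaussian mixture prior p(z) = sum_k pi_k N(mu_k, diag(sigma_k^2))
        self.logits = nn.Parameter(torch.zeros(n_components))
        self.means = nn.Parameter(torch.randn(n_components, latent_dim))
        self.log_scales = nn.Parameter(torch.zeros(n_components, latent_dim))
        # Piecewise-affine decoder f(z): an MLP with leaky-ReLU activations
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, 256),
            nn.LeakyReLU(0.2),
            nn.Linear(256, data_dim),
        )

    def sample_prior(self, n):
        # Pick a mixture component per sample, then draw z from that Gaussian.
        comps = torch.distributions.Categorical(logits=self.logits).sample((n,))
        mu = self.means[comps]
        sigma = self.log_scales[comps].exp()
        return mu + sigma * torch.randn_like(mu)

    def forward(self, n_samples):
        z = self.sample_prior(n_samples)
        return self.net(z), z

Calling MixturePriorDecoder()(64), for example, samples 64 latents from the mixture prior and returns their decoded outputs along with the latents.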

Generalized Identifiability Bounds for Mixture Models with Grouped Samples

It is shown that, if every subset of k mixture components of a mixture model is linearly independent, then that mixture model is identifiable with only (2m − 1)/(k − 1) samples per group, and that this value cannot be improved.
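For a concrete reading of this bound (an illustrative evaluation of the stated formula, not an example from the paper): with m = 3 mixture components and pairwise linear independence (k = 2), it gives (2·3 − 1)/(2 − 1) = 5 samples per group.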

Linear Causal Disentanglement via Interventions

A generalization of the RQ decomposition of a matrix is used, replacing the usual orthogonal and upper-triangular conditions with analogues that depend on a partial order on the rows of the matrix, where the partial order is determined by a latent causal model.
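For background, the classical RQ decomposition that this generalizes factors a matrix A as A = RQ with R upper triangular and Q orthogonal. The snippet below (a sketch using standard SciPy, not the paper's partial-order variant) checks exactly those two conditions.

import numpy as np
from scipy.linalg import rq

A = np.random.randn(4, 4)
R, Q = rq(A)                              # classical factorization A = R @ Q

assert np.allclose(A, R @ Q)              # reconstruction
assert np.allclose(Q @ Q.T, np.eye(4))    # Q is orthogonal
assert np.allclose(R, np.triu(R))         # R is upper triangular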

Learning Causal Representations of Single Cells via Sparse Mechanism Shift Modeling

A deep generative model of single-cell gene expression data is proposed in which each perturbation is treated as a stochastic intervention targeting an unknown, but sparse, subset of latent variables.
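As a loose illustration of that sparse-intervention idea (not the cited model; all names, dimensions, and the fixed masks below are assumptions), each perturbation could shift only a small masked subset of latent coordinates before decoding:

import torch

latent_dim, n_perturbations = 16, 8

# One sparse binary mask per perturbation: which latent mechanisms it targets.
masks = (torch.rand(n_perturbations, latent_dim) < 0.1).float()
shifts = torch.randn(n_perturbations, latent_dim)   # per-perturbation shift

def sample_latent(perturbation_idx, n_cells):
    # Base latent noise plus a stochastic shift applied only where the mask is 1.
    z = torch.randn(n_cells, latent_dim)
    noise = 0.1 * torch.randn(n_cells, latent_dim)
    return z + masks[perturbation_idx] * (shifts[perturbation_idx] + noise)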