Corpus ID: 236428995

Invariance-based Multi-Clustering of Latent Space Embeddings for Equivariant Learning

@article{Bajaj2021InvariancebasedMO,
  title={Invariance-based Multi-Clustering of Latent Space Embeddings for Equivariant Learning},
  author={Chandrajit L. Bajaj and Avik Roy and Haoran Zhang},
  journal={ArXiv},
  year={2021},
  volume={abs/2107.11717}
}
  • Computer Science, Mathematics
Variational Autoencoders (VAEs) have been shown to be remarkably effective in recovering model latent spaces for several computer vision tasks. However, currently trained VAEs, for a number of reasons, seem to fall short in learning invariant and equivariant clusters in latent space. Our work focuses on providing solutions to this problem and presents an approach to disentangle equivariance feature maps in a Lie group manifold by enforcing deep, group-invariant learning. Simultaneously…
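The abstract's idea of "enforcing deep, group-invariant learning" can be sketched generically. The following is an illustrative regularizer, not the authors' actual loss: it penalizes an encoder whenever transformed copies of an input map to different latent codes, here using the group of 90-degree rotations as the example group action. All names are hypothetical.

```python
import numpy as np

def invariance_penalty(encode, x, group_actions):
    """Mean squared deviation between the latent code of x and the codes of
    its transformed copies; driving this toward zero pushes the encoder to
    be invariant under the listed group actions. Illustrative sketch only."""
    z0 = encode(x)
    return np.mean([np.mean((encode(g(x)) - z0) ** 2) for g in group_actions])

# Example group: 90-degree rotations acting on a batch of square images (B, H, W).
rot90s = [lambda b, k=k: np.rot90(b, k=k, axes=(1, 2)) for k in (1, 2, 3)]
```

An encoder that averages over all pixels is rotation-invariant and incurs (numerically) zero penalty; an encoder that reads off a single row does not.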


References

Showing 1–10 of 38 references
Learning Latent Subspaces in Variational Autoencoders
A VAE-based generative model is proposed which is capable of extracting features correlated to binary labels in the data and structuring them in an easily interpreted latent subspace; the utility of the learned representations is demonstrated on attribute manipulation tasks on both the Toronto Face and CelebA datasets.
Deep Clustering With Variational Autoencoder
A probabilistic approach is proposed to generalize Song's approach such that Euclidean distance in the latent space is replaced by KL divergence; as a consequence of this generalization, probability distributions rather than points in the latent space can be used as inputs.
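The KL-divergence "distance" that this summary describes has a closed form when both latent distributions are diagonal Gaussians, as in a standard VAE encoder. A minimal sketch (the function name is illustrative, not from the cited paper's code):

```python
import numpy as np

def kl_diag_gaussians(mu1, logvar1, mu2, logvar2):
    """Closed-form KL( N(mu1, diag(exp(logvar1))) || N(mu2, diag(exp(logvar2))) ).
    Usable as a distribution-aware distance between two latent codes,
    replacing Euclidean distance between point embeddings."""
    v1, v2 = np.exp(logvar1), np.exp(logvar2)
    return 0.5 * np.sum(logvar2 - logvar1 + (v1 + (mu1 - mu2) ** 2) / v2 - 1.0)
```

Unlike Euclidean distance, this quantity is asymmetric and accounts for each code's uncertainty, which is the point of the generalization described above.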
Variational Deep Embedding: An Unsupervised and Generative Approach to Clustering
Variational Deep Embedding (VaDE) is proposed, a novel unsupervised generative clustering approach within the framework of the Variational Auto-Encoder (VAE), which shows its capability of generating highly realistic samples for any specified cluster, without using supervised information during training.
beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework
Learning an interpretable factorised representation of the independent data generative factors of the world without supervision is an important precursor for the development of artificial…
Learning to Disentangle Factors of Variation with Manifold Interaction
A higher-order Boltzmann machine is proposed that incorporates multiplicative interactions among groups of hidden units, each of which learns to encode a distinct factor of variation; it achieves state-of-the-art emotion recognition and face verification performance on the Toronto Face Database.
Deep Unsupervised Clustering with Gaussian Mixture Variational Autoencoders
A heuristic called the minimum information constraint, previously shown to mitigate over-regularization in VAEs, is shown to also improve unsupervised clustering performance in this variant of the variational autoencoder with a Gaussian mixture prior.
VAE with a VampPrior
This paper proposes to extend the variational auto-encoder (VAE) framework with a new type of prior called the "Variational Mixture of Posteriors" prior, or VampPrior for short, which consists of a mixture distribution with components given by variational posteriors conditioned on learnable pseudo-inputs.
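The VampPrior's mixture-of-posteriors density can be sketched in a few lines, assuming diagonal-Gaussian posteriors and a uniform mixture over K pseudo-inputs. The names are illustrative, not taken from the paper's code:

```python
import numpy as np

def log_diag_gaussian(z, mu, logvar):
    # Log density of diagonal Gaussians N(mu, diag(exp(logvar))) evaluated at z.
    return -0.5 * np.sum(logvar + np.log(2 * np.pi)
                         + (z - mu) ** 2 / np.exp(logvar), axis=-1)

def vampprior_logpdf(z, pseudo_mu, pseudo_logvar):
    """log p(z) under a uniform mixture of K variational posteriors, where
    (pseudo_mu, pseudo_logvar) are the (K, D) encoder outputs for K learnable
    pseudo-inputs -- the core of the VampPrior construction."""
    comp = log_diag_gaussian(z[None, :], pseudo_mu, pseudo_logvar)  # shape (K,)
    m = comp.max()
    # Numerically stable log-sum-exp over components, minus log K.
    return m + np.log(np.exp(comp - m).sum()) - np.log(len(pseudo_mu))
```

In the actual method the pseudo-inputs are trained jointly with the VAE; here they would simply be passed through the encoder to produce `pseudo_mu` and `pseudo_logvar`.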
AVT: Unsupervised Learning of Transformation Equivariant Representations by Autoencoding Variational Transformations
A novel principled method, Autoencoding Variational Transformations (AVT), trains networks by maximizing the mutual information between transformations and representations; the proposed AVT model sets a new state of the art on unsupervised tasks.
Explicitly disentangling image content from translation and rotation with spatial-VAE
This work proposes a method for explicitly disentangling image rotation and translation from other unstructured latent factors in a variational autoencoder (VAE) framework by formulating the generative model as a function of the spatial coordinate, which makes the reconstruction error differentiable with respect to latent translation and rotation parameters.
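The coordinate trick described above can be illustrated with a toy sketch: the decoder is evaluated at rotated and translated pixel coordinates, so the reconstruction error becomes differentiable in the rotation angle and translation vector. The function name is illustrative, not from the cited paper's code:

```python
import numpy as np

def transform_coords(coords, theta, t):
    """Apply a rotation by angle theta and a translation t to an (N, 2) array
    of pixel coordinates. A coordinate-conditioned decoder evaluated at the
    result makes the reconstruction a smooth function of theta and t."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s],
                  [s,  c]])
    return coords @ R.T + t
```

The decoder then predicts the pixel value at each transformed coordinate, so rotation and translation never have to be encoded in the unstructured latent variables.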
Equivariant Transformer Networks
Equivariant Transformers (ETs) are proposed: a family of differentiable image-to-image mappings that improve the robustness of models to pre-defined continuous transformation groups with several parameters.