• Corpus ID: 61153677

# Contrastive Variational Autoencoder Enhances Salient Features

```bibtex
@article{Abid2019ContrastiveVA,
  title={Contrastive Variational Autoencoder Enhances Salient Features},
  author={Abubakar Abid and James Y. Zou},
  journal={ArXiv},
  year={2019},
  volume={abs/1902.04601}
}
```
• Published 12 February 2019
• Computer Science
• ArXiv
Variational autoencoders are powerful algorithms for identifying dominant latent structure in a single dataset. In many applications, however, we are interested in modeling latent structure and variation that are enriched in a target dataset compared to some background (e.g., enriched in patients compared to the general population). Contrastive learning is a principled framework to capture such enriched variation between the target and background, but state-of-the-art contrastive methods are…
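The target/background split described in the abstract can be illustrated with linear stand-ins for the encoders and decoder. This is a minimal sketch of the contrastive latent split (a salient latent used only for target samples, plus a shared latent used by both, with the salient part clamped to zero for background samples), not the paper's actual implementation; all dimensions and weight matrices below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not taken from the paper).
d, k_s, k_z = 10, 2, 2  # data dim, salient latent dim, shared latent dim

# Linear stand-ins for the two encoders and the shared decoder.
W_s = rng.normal(size=(k_s, d))          # salient encoder: target data only
W_z = rng.normal(size=(k_z, d))          # shared encoder: both datasets
W_dec = rng.normal(size=(d, k_s + k_z))  # shared decoder

def encode_decode(x, is_target):
    """Forward pass of the contrastive split: background samples have
    their salient latent clamped to zero before decoding."""
    s = W_s @ x if is_target else np.zeros(k_s)
    z = W_z @ x
    return W_dec @ np.concatenate([s, z])

x = rng.normal(size=d)
x_hat_target = encode_decode(x, is_target=True)
x_hat_background = encode_decode(x, is_target=False)
```

Because the salient latent is zeroed for background data, only the shared latent can explain background variation, which pushes target-specific (enriched) structure into the salient latent.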

## Citations

• *Computer Science, 2019* — This work presents a framework for modeling multiple datasets that come from differing distributions but share some common latent structure, and successfully models differing data populations while explicitly encouraging the isolation of the shared and private latent factors.
• *Computer Science, AISTATS 2022* — Proposes the moment matching contrastive VAE (MM-cVAE), a reformulation of the VAE for contrastive analysis (CA) that uses the maximum mean discrepancy to explicitly enforce two crucial latent-variable constraints underlying CA, and finds that it outperforms the previous state-of-the-art both qualitatively and on a set of quantitative metrics.
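The maximum mean discrepancy used by MM-cVAE can be estimated directly from samples. Below is a minimal numpy sketch of a (biased) squared-MMD estimate under an RBF kernel; the bandwidth `gamma` is an arbitrary illustrative choice, not a value from the MM-cVAE paper.

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=0.1):
    """Biased sample estimate of squared maximum mean discrepancy (MMD)
    between samples X and Y under the RBF kernel k(a,b) = exp(-gamma*||a-b||^2)."""
    def gram(A, B):
        # Pairwise squared Euclidean distances, then the kernel matrix.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * d2)
    return gram(X, X).mean() + gram(Y, Y).mean() - 2.0 * gram(X, Y).mean()
```

The estimate is zero when the two sample sets coincide and grows as the two distributions move apart, which is what makes it usable as a penalty for matching (or separating) latent distributions.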
• *Computer Science, ArXiv 2019* — Inspired by the popular noise contrastive estimation algorithm, this work proposes NC-VAE, where the encoder discriminates between the latent codes of real data and of some artificially generated noise, in addition to encouraging good data-reconstruction abilities.
• *Computer Science, 2019* — Variational autoencoders, a combination of a non-linear latent-variable model and an amortized inference scheme, are a popular method for recovering latent structure and have received considerable attention in recent years.
• *Computer Science, CVPR 2020* — An evaluation of NestedVAE on domain and attribute invariance, change detection, and learning common factors for the prediction of biological sex demonstrates that NestedVAE significantly outperforms alternative methods.
• *Computer Science, IEEE Transactions on Multimedia 2022* — This work proposes a Semantic Regularized class-conditional Generative Adversarial Network (SReGAN), which incorporates an additional discriminator and classifier into the generator-discriminator minimax game.
• *Computer Science, IEEE Transactions on Signal Processing 2022* — Advocates deep contrastive (Dc) PCA for nonlinear contrastive data analytics, leveraging the power of deep neural networks to explore the hidden nonlinear relationships in the datasets and extract the desired contrastive features.
• *Computer Science, Biology; Molecular Systems Biology 2020* — This review provides a brief overview of the technical notions behind generative models and their implementation with deep learning techniques, and describes several different ways in which these models can be utilized in practice, using several recent applications in molecular biology as examples.
• *Computer Science, ArXiv 2020* — Introduces a novel approach called contrastive network representation learning (cNRL), which embeds network nodes into a low-dimensional space that reveals the uniqueness of one network compared to another, and designs a method that offers interpretability in the learned results.
• *Biology, bioRxiv 2021* — Introduces Contrastive Variational Inference (contrastiveVI), a framework for analyzing treatment-control scRNA-seq datasets that explicitly disentangles the data into shared and treatment-specific latent variables.

## References

Showing 1–10 of 24 references.

• *Computer Science, NIPS 2016* — Proposes a new inference model, the Ladder Variational Autoencoder, that recursively corrects the generative distribution by a data-dependent approximate likelihood, in a process resembling the recently proposed Ladder Network.
• *Computer Science, NIPS 2013* — Formalizes the notion of contrastive learning for mixture models and develops spectral algorithms for inferring mixture components specific to a foreground dataset when contrasted with a background dataset.
• *Computer Science, NeurIPS 2018* — Decomposes the evidence lower bound to show the existence of a term measuring the total correlation between latent variables, motivating the $\beta$-TCVAE (Total Correlation Variational Autoencoder).
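For context, the decomposition referred to here (as given in the $\beta$-TCVAE paper) splits the aggregate KL term of the ELBO into index-code mutual information, total correlation, and dimension-wise KL:

$$
\mathbb{E}_{p(x)}\big[\mathrm{KL}\big(q(z \mid x)\,\|\,p(z)\big)\big]
= I_q(z; x)
\;+\; \mathrm{KL}\Big(q(z)\,\Big\|\,\textstyle\prod_j q(z_j)\Big)
\;+\; \sum_j \mathrm{KL}\big(q(z_j)\,\|\,p(z_j)\big),
$$

where $\beta$-TCVAE upweights only the middle (total-correlation) term by a factor $\beta$ to encourage disentangled latents.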
• *Computer Science, AAAI 2019* — Presents a probabilistic model for dimensionality reduction that discovers signal enriched in the target dataset relative to the background dataset, and demonstrates the application of the technique to de-noising, feature selection, and subgroup discovery settings.
• This tutorial introduces the intuitions behind VAEs, explains the mathematics behind them, and describes some empirical behavior.
• *Computer Science, ICLR 2014* — Introduces a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case.
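The stochastic variational inference algorithm summarized here hinges on the reparameterization trick: a latent sample is written as a deterministic, differentiable function of the variational parameters plus independent noise. A minimal numpy sketch (function name and shapes are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps with eps ~ N(0, I), so the sample is a
    deterministic function of (mu, log_var) through which gradients can flow."""
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(0.5 * log_var) * eps
```

Parameterizing the log-variance (rather than the variance) keeps the standard deviation positive without any constraint on the encoder's raw output.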
• *Computer Science, NeurIPS 2018* — Achieves better results for translation on challenging datasets as well as for cross-domain retrieval on realistic datasets, and compares the model to the state of the art in multi-modal image translation.
• *Computer Science, ICML 2016* — Develops the general framework of Rich Component Analysis (RCA) to model settings where the observations from different views are driven by different sets of latent components, each of which can be a complex, high-dimensional distribution.
• *Computer Science, ICCV 2015* — A novel deep learning framework for attribute prediction in the wild that cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags but pre-trained differently.
• *Computer Science, Nature Communications 2018* — Proposes contrastive principal component analysis (cPCA), which identifies low-dimensional structure that is enriched in a dataset relative to comparison data and enables visualization of dataset-specific patterns.
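The cPCA method summarized above has a compact linear-algebra core: take the top eigenvectors of the difference between the target and background covariance matrices. A minimal numpy sketch, where the contrast strength `alpha` and the dimensions are illustrative choices rather than values from the paper:

```python
import numpy as np

def cpca_directions(target, background, alpha=1.0, k=2):
    """Top-k contrastive principal directions: eigenvectors of
    C_target - alpha * C_background with the largest eigenvalues."""
    C_t = np.cov(target, rowvar=False)
    C_b = np.cov(background, rowvar=False)
    vals, vecs = np.linalg.eigh(C_t - alpha * C_b)   # symmetric matrix
    order = np.argsort(vals)[::-1]                   # largest contrast first
    return vecs[:, order[:k]]
```

Directions with high variance in both datasets cancel in the difference, so the leading eigenvectors capture variation enriched in the target relative to the background; sweeping `alpha` trades off target variance against background suppression.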