Corpus ID: 61153677

Contrastive Variational Autoencoder Enhances Salient Features

@article{Abid2019ContrastiveVA,
  title={Contrastive Variational Autoencoder Enhances Salient Features},
  author={Abubakar Abid and James Y. Zou},
  journal={ArXiv},
  year={2019},
  volume={abs/1902.04601}
}
Variational autoencoders are powerful algorithms for identifying dominant latent structure in a single dataset. In many applications, however, we are interested in modeling latent structure and variation that are enriched in a target dataset compared to some background---e.g. enriched in patients compared to the general population. Contrastive learning is a principled framework to capture such enriched variation between the target and background, but state-of-the-art contrastive methods are… 
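The contrastive setup the abstract describes can be summarized by the cVAE generative model: target samples depend on both salient and irrelevant latent variables, while background samples depend only on the irrelevant ones. A sketch of the model's structure, with $f_\theta$ denoting the shared decoder network (notation illustrative):

```latex
% Target samples x_i draw on both irrelevant (z) and salient (s) latents;
% background samples b_j use only the irrelevant latents, with s fixed to 0.
\begin{align*}
  z_i, s_i &\sim \mathcal{N}(0, I), \\
  x_i &\sim f_\theta(z_i, s_i)          && \text{(target)}, \\
  b_j &\sim f_\theta(z_j, \mathbf{0})   && \text{(background)}.
\end{align*}
```

Variation enriched in the target is then captured by the salient latents $s$, since the background likelihood cannot use them.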


Isolating Latent Structure with Cross-population Variational Autoencoders

This work presents a framework for modeling multiple data sets which come from differing distributions but which share some common latent structure, and successfully models differing data populations while explicitly encouraging the isolation of the shared and private latent factors.

Moment Matching Deep Contrastive Latent Variable Models

The moment matching contrastive VAE (MM-cVAE), a reformulation of the VAE for contrastive analysis (CA) that uses the maximum mean discrepancy to explicitly enforce two crucial latent variable constraints underlying CA, is proposed and found to outperform the previous state of the art both qualitatively and on a set of quantitative metrics.
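The maximum mean discrepancy used by MM-cVAE can be estimated from samples with a kernel. The sketch below is a generic RBF-kernel MMD² estimator in NumPy; the bandwidth `sigma` and the function names are illustrative choices, not MM-cVAE's actual implementation:

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    # Pairwise RBF (Gaussian) kernel matrix between rows of a and b.
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Biased squared-MMD estimate between sample sets x and y:
    # E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)].
    kxx = rbf_kernel(x, x, sigma).mean()
    kyy = rbf_kernel(y, y, sigma).mean()
    kxy = rbf_kernel(x, y, sigma).mean()
    return kxx + kyy - 2 * kxy
```

Samples from the same distribution give an MMD² near zero, while shifted distributions give larger values, which is what makes the quantity usable as a penalty for matching latent distributions.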

Noise Contrastive Variational Autoencoders

Inspired by the popular noise contrastive estimation algorithm, this work proposes NC-VAE where the encoder discriminates between the latent codes of real data and of some artificially generated noise, in addition to encouraging good data reconstruction abilities.

Cross-population Variational Autoencoders

Variational autoencoders, a combination of a non-linear latent variable model and an amortized inference scheme, are a popular method for recovering latent structure and have received considerable attention in recent years.

NestedVAE: Isolating Common Factors via Weak Supervision

An evaluation of NestedVAE on domain and attribute invariance, change detection, and learning common factors for the prediction of biological sex demonstrates that NestedVAE significantly outperforms alternative methods.

Semantic Regularized Class-Conditional GANs for Semi-Supervised Fine-Grained Image Synthesis

This work proposes a Semantic Regularized class-conditional Generative Adversarial Network, which is referred to as SReGAN, and incorporates an additional discriminator and classifier into the generator-discriminator minimax game.

Deep Contrastive Principal Component Analysis Adaptive to Nonlinear Data

Deep contrastive (Dc) PCA is advocated for nonlinear contrastive data analytics, which leverages the power of deep neural networks to explore the hidden nonlinear relationships in the datasets and extract the desired contrastive features.

Enhancing scientific discoveries in molecular biology with deep generative models

This review provides a brief overview of the technical notions behind generative models and their implementation with deep learning techniques and describes several different ways in which these models can be utilized in practice, using several recent applications in molecular biology as examples.

Interpretable Contrastive Learning for Networks

This work introduces a novel approach called contrastive network representation learning (cNRL), which embeds network nodes into a low-dimensional space that reveals the uniqueness of one network compared to another, and designs a method that offers interpretability in the learned results.

Isolating salient variations of interest in single-cell data with contrastiveVI

This work introduces Contrastive Variational Inference (contrastiveVI), a framework for analyzing treatment-control scRNA-seq datasets that explicitly disentangles the data into shared and treatment-specific latent variables.

References


Ladder Variational Autoencoders

A new inference model is proposed, the Ladder Variational Autoencoder, that recursively corrects the generative distribution by a data dependent approximate likelihood in a process resembling the recently proposed Ladder Network.

Contrastive Learning Using Spectral Methods

This paper formalizes this notion of contrastive learning for mixture models, and develops spectral algorithms for inferring mixture components specific to a foreground data set when contrasted with a background data set.

Isolating Sources of Disentanglement in Variational Autoencoders

We decompose the evidence lower bound to show the existence of a term measuring the total correlation between latent variables. We use this to motivate our $\beta$-TCVAE (Total Correlation Variational Autoencoder), a refinement of the $\beta$-VAE objective.
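The total-correlation term referred to above is the KL divergence between the aggregate posterior $q(z)$ and the product of its marginals, i.e. a measure of how far the latent code is from factorizing (standard notation):

```latex
% Total correlation of the aggregate posterior: zero iff the
% latent dimensions z_j are statistically independent under q.
\mathrm{TC}(z) \;=\; D_{\mathrm{KL}}\!\left( q(z) \,\middle\|\, \textstyle\prod_{j} q(z_j) \right)
```

Penalizing this term, rather than the full KL to the prior as in $\beta$-VAE, targets disentanglement more directly.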

Unsupervised learning with contrastive latent variable models

This work presents a probabilistic model for dimensionality reduction to discover signal that is enriched in the target dataset relative to the background dataset and demonstrates the application of the technique to de-noising, feature selection, and subgroup discovery settings.

Tutorial on Variational Autoencoders

This tutorial introduces the intuitions behind VAEs, explains the mathematics behind them, and describes some empirical behavior.

Auto-Encoding Variational Bayes

A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
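The scalability of this algorithm rests on the reparameterization trick, which rewrites sampling from the approximate posterior as a differentiable transform of external noise. A minimal NumPy sketch (gradient machinery omitted; names illustrative):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    # z = mu + sigma * eps with eps ~ N(0, I). The randomness is isolated
    # in eps, so gradients can flow through mu and log_var during training.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps
```

With `log_var = 0`, samples are distributed with unit variance around `mu`, exactly as direct sampling from the posterior would give, but now as a deterministic function of the parameters.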

Image-to-image translation for cross-domain disentanglement

This paper achieves better results for translation on challenging datasets as well as for cross-domain retrieval on realistic datasets and compares the model to the state-of-the-art in multi-modal image translation.

Rich Component Analysis

This paper develops the general framework of Rich Component Analysis (RCA) to model settings where the observations from different views are driven by different sets of latent components, and each component can be a complex, high-dimensional distribution.

Deep Learning Face Attributes in the Wild

A novel deep learning framework for attribute prediction in the wild is proposed that cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags but pre-trained differently.

Exploring patterns enriched in a dataset with contrastive principal component analysis

This paper proposes a method, contrastive principal component analysis (cPCA), which identifies low-dimensional structures that are enriched in a dataset relative to comparison data and enables visualization of dataset-specific patterns.
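cPCA's core computation can be sketched in a few lines: form the covariance of each dataset and take the top eigenvectors of the contrastive covariance $C_{\text{target}} - \alpha\, C_{\text{background}}$. The contrast strength `alpha` is a user-chosen hyperparameter; this is a simplified sketch, not the authors' implementation:

```python
import numpy as np

def cpca_directions(target, background, alpha=1.0, n_components=2):
    # Covariances of the two datasets (rows are samples, columns features).
    c_tg = np.cov(target, rowvar=False)
    c_bg = np.cov(background, rowvar=False)
    # Top eigenvectors of the contrastive covariance matrix; eigh returns
    # eigenvalues in ascending order, so reverse to take the largest.
    vals, vecs = np.linalg.eigh(c_tg - alpha * c_bg)
    order = np.argsort(vals)[::-1]
    return vecs[:, order[:n_components]]
```

Setting `alpha=0` recovers ordinary PCA on the target, while larger values of `alpha` penalize directions along which the background also varies strongly, leaving the target-enriched structure.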