Corpus ID: 56177655

A Novel Variational Autoencoder with Applications to Generative Modelling, Classification, and Ordinal Regression

@article{Jaskari2018ANV,
  title={A Novel Variational Autoencoder with Applications to Generative Modelling, Classification, and Ordinal Regression},
  author={Joel Jaskari and Jyri J. Kivinen},
  journal={ArXiv},
  year={2018},
  volume={abs/1812.07352}
}
We develop a novel probabilistic generative model based on the variational autoencoder approach. Notable aspects of our architecture are: a novel way of specifying the prior over the latent variables, and the introduction of an ordinality-enforcing unit. We describe how to perform supervised, unsupervised, and semi-supervised learning, and nominal and ordinal classification, with the model. We analyze the generative properties of the approach, and the classification effectiveness under nominal and ordinal…
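The truncated abstract does not specify how the ordinality-enforcing unit works. Purely as a point of reference, one standard construction for such a unit is a cumulative-link head with ordered cutpoints; the sketch below, including the name CumulativeLinkHead, is an assumption for illustration, not the authors' design.

import torch
import torch.nn as nn

class CumulativeLinkHead(nn.Module):
    """Illustrative ordinal output unit (cumulative-link construction).

    Hypothetical sketch, NOT the unit from the paper: a scalar score of
    shape (batch,) is mapped to K ordered class probabilities through
    K-1 strictly increasing cutpoints, so predictions respect the
    ordering of the labels.
    """

    def __init__(self, num_classes: int):
        super().__init__()
        # Unconstrained parameters; exponentiated gaps keep cutpoints ordered.
        self.first_cut = nn.Parameter(torch.tensor(0.0))
        self.log_gaps = nn.Parameter(torch.zeros(num_classes - 2))

    def forward(self, score: torch.Tensor) -> torch.Tensor:
        # Build increasing cutpoints c_1 < c_2 < ... < c_{K-1}.
        gaps = torch.exp(self.log_gaps)
        cuts = self.first_cut + torch.cat(
            [gaps.new_zeros(1), torch.cumsum(gaps, dim=0)])
        # P(y <= k) = sigmoid(c_k - score); successive differences give P(y = k).
        cdf = torch.sigmoid(cuts.unsqueeze(0) - score.unsqueeze(-1))
        cdf = torch.cat([cdf, torch.ones_like(cdf[:, :1])], dim=-1)
        return cdf - torch.cat(
            [torch.zeros_like(cdf[:, :1]), cdf[:, :-1]], dim=-1)

In a VAE setting the scalar score would typically be read off the latent code, and the resulting class probabilities can feed a standard classification loss for the supervised or semi-supervised regimes the abstract mentions.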

Citations

Recursively Conditional Gaussian for Ordinal Unsupervised Domain Adaptation

TLDR
A recursively conditional Gaussian (RCG) set is adapted for ordered-constraint modeling; it admits a tractable joint prior and can control the density of content vectors that violate the poset constraints via a simple "three-sigma rule".

Ordinal-Content VAE: Isolating Ordinal-Valued Content Factors in Deep Latent Variable Models

TLDR
A novel extension of VAE that imposes a partially ordered set (poset) structure in the content latent space, while simultaneously making it aligned with the ordinal content values, and demonstrates significant improvements in content-style separation over previous non-ordinal approaches.

Auto-encoding graph-valued data with applications to brain connectomes

TLDR
This article develops a nonlinear latent factor model for summarizing the brain graph in both unsupervised and supervised settings that builds on methods for hierarchical modeling of replicated graph data, as well as variational auto-encoders that use neural networks for dimensionality reduction.

Anomaly Detection of Wind Turbine Time Series using Variational Recurrent Autoencoders

TLDR
This work investigates the problem of ice accumulation on wind turbines by framing it as anomaly detection over multivariate time series, using a Variational Recurrent Autoencoder (VRAE) and unsupervised clustering algorithms to classify the learned representations as normal (no ice accumulated) or abnormal (ice accumulated).

Controllable content generation

TLDR
The goal is to let content creators decide which properties of the generated content they would like to control, to give them tangible control over those properties, and to let generative models fill in the gaps.

References

Showing 1–10 of 22 references

Semi-supervised Learning with Deep Generative Models

TLDR
It is shown that deep generative models and approximate Bayesian inference exploiting recent advances in variational methods can be used to provide significant improvements, making generative approaches highly competitive for semi-supervised learning.
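For reference, the labelled and unlabelled bounds from that paper (the M2 model) treat the class label as an observed latent variable where available and marginalise over it otherwise; the paper additionally adds an explicit classifier term \alpha\,\mathbb{E}[\log q_\phi(y \mid x)] on labelled data:

  -\mathcal{L}(x, y) = \mathbb{E}_{q_\phi(z \mid x, y)}\big[\log p_\theta(x \mid y, z) + \log p(y) + \log p(z) - \log q_\phi(z \mid x, y)\big]

  -\mathcal{U}(x) = \sum_{y} q_\phi(y \mid x)\,\big({-\mathcal{L}(x, y)}\big) + \mathcal{H}\big(q_\phi(y \mid x)\big)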

Ladder Variational Autoencoders

TLDR
A new inference model is proposed, the Ladder Variational Autoencoder, that recursively corrects the generative distribution by a data dependent approximate likelihood in a process resembling the recently proposed Ladder Network.
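The recursive correction is a precision-weighted combination of the bottom-up, data-dependent estimates (\hat{\mu}, \hat{\sigma}) with the top-down generative distribution (\mu_p, \sigma_p) at each stochastic layer:

  \sigma_q^{2} = \frac{1}{\hat{\sigma}^{-2} + \sigma_p^{-2}}, \qquad \mu_q = \sigma_q^{2}\,\big(\hat{\mu}\,\hat{\sigma}^{-2} + \mu_p\,\sigma_p^{-2}\big)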

Auxiliary Deep Generative Models

TLDR
This work extends deep generative models with auxiliary variables, which improve the variational approximation, and proposes a model with two stochastic layers and skip connections that achieves state-of-the-art performance in semi-supervised learning on the MNIST, SVHN, and NORB datasets.

Auto-Encoding Variational Bayes

TLDR
A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
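The bound optimised in that paper is the now-standard evidence lower bound (ELBO), made differentiable through the reparameterization trick:

  \log p_\theta(x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] - D_{\mathrm{KL}}\big(q_\phi(z \mid x)\,\|\,p(z)\big)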

A simple squared-error reformulation for ordinal classification

In this paper, we explore ordinal classification (in the context of deep neural networks) through a simple modification of the squared error loss which allows it not only to be sensitive to…
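A minimal sketch of one squared-error reformulation consistent with this abstract: take the expectation of the label index under a softmax and penalise its squared distance to the true label, which keeps a full distribution over classes while making the loss ordering-sensitive. Treat the exact formulation below as an assumption, not the paper's.

import torch
import torch.nn.functional as F

def expected_label_squared_error(logits: torch.Tensor,
                                 targets: torch.Tensor) -> torch.Tensor:
    """Squared error on the expected label index under a softmax.

    Assumed sketch: the softmax keeps a discrete distribution over the
    K classes, while the squared distance between its expectation and
    the integer target makes the loss sensitive to class ordering.
    """
    probs = F.softmax(logits, dim=-1)                       # (batch, K)
    label_values = torch.arange(probs.shape[-1], dtype=probs.dtype)
    expected = (probs * label_values).sum(dim=-1)           # (batch,)
    return F.mse_loss(expected, targets.to(probs.dtype))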

Importance Weighted Autoencoders

TLDR
The importance weighted autoencoder (IWAE), a generative model with the same architecture as the VAE, but which uses a strictly tighter log-likelihood lower bound derived from importance weighting, shows empirically that IWAEs learn richer latent space representations than VAEs, leading to improved test log-likelihood on density estimation benchmarks.
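The IWAE bound averages k importance weights inside the logarithm; it reduces to the standard ELBO at k = 1 and tightens monotonically as k grows:

  \mathcal{L}_k(x) = \mathbb{E}_{z_1, \dots, z_k \sim q_\phi(z \mid x)}\!\left[\log \frac{1}{k} \sum_{i=1}^{k} \frac{p_\theta(x, z_i)}{q_\phi(z_i \mid x)}\right] \;\le\; \log p_\theta(x)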

Variational Gaussian Process Auto-Encoder for Ordinal Prediction of Facial Action Units

TLDR
A novel Gaussian process (GP) auto-encoder modeling approach is proposed, which introduces GP encoders to project multiple observed features onto a latent space, while GP decoders are responsible for reconstructing the original features.

Learning Structured Output Representation using Deep Conditional Generative Models

TLDR
A deep conditional generative model for structured output prediction using Gaussian latent variables is developed; it is trained efficiently in the framework of stochastic gradient variational Bayes and allows for fast prediction using stochastic feed-forward inference.
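The conditional VAE (CVAE) bound from that paper conditions both the prior and the decoder on the input x, with y the structured output:

  \log p_\theta(y \mid x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x, y)}\big[\log p_\theta(y \mid x, z)\big] - D_{\mathrm{KL}}\big(q_\phi(z \mid x, y)\,\|\,p_\theta(z \mid x)\big)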

Stochastic Backpropagation and Approximate Inference in Deep Generative Models

We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning.
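The core device this paper shares with Auto-Encoding Variational Bayes is the Gaussian reparameterization, which moves sampling outside the parameters so gradients flow through the posterior's mean and variance; a minimal sketch of that rule, not the papers' full algorithms:

import torch

def reparameterized_gaussian_sample(mu: torch.Tensor,
                                    log_var: torch.Tensor) -> torch.Tensor:
    """Draw z ~ N(mu, diag(exp(log_var))) as z = mu + sigma * eps.

    With eps ~ N(0, I) independent of the parameters, a Monte Carlo
    estimate of the variational objective stays differentiable in mu
    and log_var (stochastic backpropagation).
    """
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps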

Semi-supervised Gaussian Process Ordinal Regression

TLDR
Experimental results demonstrate that the proposed GP-based approach makes effective use of the unlabeled data to give better generalization performance than the supervised approach, and is a useful approach for probabilistic semi-supervised ordinal regression problems.