Corpus ID: 56177655

A Novel Variational Autoencoder with Applications to Generative Modelling, Classification, and Ordinal Regression

@article{Jaskari2018ANV,
  title={A Novel Variational Autoencoder with Applications to Generative Modelling, Classification, and Ordinal Regression},
  author={Joel Jaskari and Jyri J. Kivinen},
  journal={ArXiv},
  year={2018},
  volume={abs/1812.07352}
}
We develop a novel probabilistic generative model based on the variational autoencoder approach. Notable aspects of our architecture are: a novel way of specifying the prior over the latent variables, and the introduction of an ordinality-enforcing unit. We describe how to perform supervised, unsupervised, and semi-supervised learning, and nominal and ordinal classification, with the model. We analyze the generative properties of the approach, and the classification effectiveness under nominal and ordinal…
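
The abstract does not spell out how the ordinality-enforcing unit works, so the following is only a generic, hypothetical sketch of one standard way to enforce ordinality in a classifier head (a cumulative-link construction); the authors' actual unit may differ.

```python
# Hypothetical ordinal output unit: cumulative probabilities built from a
# shared score and ordered thresholds, so predictions respect class order
# by construction. This is an illustrative guess, not the paper's unit.
import numpy as np

def ordinal_probs(score, thresholds):
    """P(y <= k) = sigmoid(threshold_k - score) for ordered thresholds;
    differencing consecutive cumulative terms yields a valid distribution
    over the K = len(thresholds) + 1 ordered classes."""
    cum = 1.0 / (1.0 + np.exp(-(np.sort(thresholds) - score)))
    cum = np.concatenate([[0.0], cum, [1.0]])
    return np.diff(cum)

# Example: one scalar score, three thresholds, four ordered classes.
print(ordinal_probs(score=0.5, thresholds=np.array([-1.0, 0.0, 1.0])))
```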

Citations

Recursively Conditional Gaussian for Ordinal Unsupervised Domain Adaptation

A recursively conditional Gaussian (RCG) prior is adapted for ordered-constraint modeling; it admits a tractable joint distribution and can control the density of content vectors that violate the poset constraints via a simple "three-sigma rule".
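
As a hedged illustration (the details here are guesses, not the paper's exact construction): a chain of Gaussians, each conditioned on its predecessor with a positive mean shift, gives a tractable joint prior with ordered means, and choosing the shift to be at least three standard deviations keeps the mass of order-violating samples negligible, which is one natural reading of the "three-sigma rule".

```python
# Sketch of a recursively conditional Gaussian chain with ordered means.
# With delta >= 3 * sigma, P(z_i < z_{i-1}) = Phi(-3) ~ 0.13% per step.
import numpy as np

rng = np.random.default_rng(0)

def sample_rcg_chain(k, delta=3.0, sigma=1.0):
    """Sample z_1, ..., z_k with z_i | z_{i-1} ~ N(z_{i-1} + delta, sigma^2)."""
    z = [rng.normal(0.0, sigma)]
    for _ in range(k - 1):
        z.append(rng.normal(z[-1] + delta, sigma))
    return np.array(z)

z = sample_rcg_chain(4)
print(z, np.all(np.diff(z) > 0))  # ordered with high probability
```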

Ordinal-Content VAE: Isolating Ordinal-Valued Content Factors in Deep Latent Variable Models

A novel extension of the VAE that imposes a partially ordered set (poset) structure on the content latent space while simultaneously aligning it with the ordinal content values, demonstrating significant improvements in content-style separation over previous non-ordinal approaches.

Auto-encoding graph-valued data with applications to brain connectomes

This article develops a nonlinear latent factor model for summarizing the brain graph in both unsupervised and supervised settings, building on methods for hierarchical modeling of replicated graph data as well as on variational auto-encoders that use neural networks for dimensionality reduction.

Anomaly Detection of Wind Turbine Time Series using Variational Recurrent Autoencoders

This work investigates the problem of ice accumulation on wind turbines by framing it as anomaly detection over multivariate time series, using a Variational Recurrent Autoencoder (VRAE) and unsupervised clustering algorithms to classify the learned representations as normal (no ice accumulated) or abnormal (ice accumulated).
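
A minimal sketch of the two-stage pipeline this summary describes, assuming a VRAE encoder has already been trained: embed each time series into a latent vector, then cluster the latents. The random latent codes below are placeholders for real encodings, and KMeans stands in for whichever clustering algorithm the authors used.

```python
# Stage 2 of the pipeline: cluster (placeholder) VRAE latent codes into
# "normal" vs. "abnormal" groups without labels.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical latent codes: 100 normal series, 20 anomalous ones.
latents = np.vstack([
    rng.normal(0.0, 1.0, size=(100, 8)),
    rng.normal(4.0, 1.0, size=(20, 8)),
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(latents)
print(np.bincount(labels))  # majority cluster ~ normal, minority ~ iced
```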

Controllable content generation

The goal is to give content creators the power to decide which properties of the content they would like to control, to give them tangible control over these properties, and to let generative models fill in the gaps.

References

Semi-supervised Learning with Deep Generative Models

It is shown that deep generative models and approximate Bayesian inference exploiting recent advances in variational methods can be used to provide significant improvements, making generative approaches highly competitive for semi-supervised learning.

Ladder Variational Autoencoders

A new inference model is proposed, the Ladder Variational Autoencoder, that recursively corrects the generative distribution by a data dependent approximate likelihood in a process resembling the recently proposed Ladder Network.
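
The "correction" is commonly implemented as a precision-weighted combination of the top-down (generative) and bottom-up (data-dependent) Gaussian parameters at each layer; a minimal sketch of that merge, assuming diagonal Gaussians:

```python
# Precision-weighted merge used in Ladder-VAE-style inference: combine two
# Gaussian estimates of a latent layer by adding their precisions.
import numpy as np

def precision_weighted_merge(mu_p, var_p, mu_q, var_q):
    """Combine top-down prior stats (mu_p, var_p) with bottom-up,
    data-dependent stats (mu_q, var_q) into one posterior Gaussian."""
    prec_p, prec_q = 1.0 / var_p, 1.0 / var_q
    var = 1.0 / (prec_p + prec_q)
    mu = var * (mu_p * prec_p + mu_q * prec_q)
    return mu, var

print(precision_weighted_merge(0.0, 1.0, 2.0, 0.5))
```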

Auxiliary Deep Generative Models

This work extends deep generative models with auxiliary variables, which improve the variational approximation, and proposes a model with two stochastic layers and skip connections that shows state-of-the-art performance in semi-supervised learning on the MNIST, SVHN, and NORB datasets.

Auto-Encoding Variational Bayes

A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
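
The key device that makes the bound differentiable is the reparameterization trick; below is a minimal NumPy sketch of it, together with the analytic KL term of the ELBO for a diagonal Gaussian posterior and a standard-normal prior (the numbers are toy placeholders, not a trained model).

```python
# Reparameterization trick and analytic KL term from the ELBO:
#   ELBO = E_q[log p(x|z)] - KL(q(z|x) || p(z))
import numpy as np

rng = np.random.default_rng(0)

def sample_latent(mu, log_var):
    """Reparameterize z ~ N(mu, sigma^2) as z = mu + sigma * eps,
    so gradients can flow through mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian q."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

mu, log_var = np.array([0.3, -0.1]), np.array([-1.0, -0.5])
print(sample_latent(mu, log_var), kl_to_standard_normal(mu, log_var))
```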

A simple squared-error reformulation for ordinal classification

In this paper, we explore ordinal classification (in the context of deep neural networks) through a simple modification of the squared-error loss which not only makes it sensitive to class ordering, but also retains a discrete probability distribution over the classes.
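
One way to read the reformulation (a sketch under that assumption, not necessarily the paper's exact loss): keep a softmax distribution over the K classes, take its expected rank as a scalar prediction, and penalize the squared distance to the true rank, so distant misrankings cost more than nearby ones.

```python
# Squared-error ordinal loss, as sketched above: the softmax keeps a
# discrete distribution over classes; the squared expected-rank error
# makes the loss sensitive to class ordering.
import numpy as np

def ordinal_squared_error(logits, y):
    """logits: (K,) raw scores; y: true class index in {0, ..., K-1}."""
    p = np.exp(logits - logits.max())
    p /= p.sum()                          # distribution over classes
    y_hat = np.dot(np.arange(len(p)), p)  # expected rank
    return (y_hat - y) ** 2

print(ordinal_squared_error(np.array([0.1, 2.0, 0.3]), y=2))
```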

Importance Weighted Autoencoders

The importance weighted autoencoder (IWAE), a generative model with the same architecture as the VAE but which uses a strictly tighter log-likelihood lower bound derived from importance weighting, shows empirically that IWAEs learn richer latent space representations than VAEs, leading to improved test log-likelihood on density estimation benchmarks.
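
A small numerical sketch of the importance-weighted bound L_k = E[log (1/k) Σ_i p(x, z_i)/q(z_i|x)], which is at least as tight as the standard ELBO (the k = 1 case); the one-dimensional densities below are toy placeholders, not a trained model.

```python
# Monte Carlo estimate of the IWAE bound with k importance samples,
# computed stably via the log-sum-exp trick.
import numpy as np

rng = np.random.default_rng(0)

def iwae_bound(log_p_joint, log_q, z_samples):
    """Estimate log (1/k) * sum_i exp(log w_i), w_i = p(x, z_i) / q(z_i | x)."""
    log_w = log_p_joint(z_samples) - log_q(z_samples)
    m = log_w.max()
    return m + np.log(np.mean(np.exp(log_w - m)))

# Toy 1-D Gaussian stand-ins for p(x, z) and q(z | x).
log_p = lambda z: -0.5 * (z - 1.0) ** 2 - 0.5 * np.log(2 * np.pi)
log_q = lambda z: -0.5 * z ** 2 - 0.5 * np.log(2 * np.pi)
z = rng.standard_normal(64)  # k = 64 samples from q
print(iwae_bound(log_p, log_q, z))
```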

Variational Gaussian Process Auto-Encoder for Ordinal Prediction of Facial Action Units

A novel Gaussian process (GP) auto-encoder modeling approach is proposed, which introduces GP encoders to project multiple observed features onto a latent space, while GP decoders are responsible for reconstructing the original features.

Learning Structured Output Representation using Deep Conditional Generative Models

A deep conditional generative model for structured output prediction using Gaussian latent variables is developed, trained efficiently in the framework of stochastic gradient variational Bayes, and allows for fast prediction using stochastic feed-forward inference.
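
A heavily simplified sketch of the conditional structure this describes: the prior network and the generator are both conditioned on the input x, so prediction reduces to drawing z from the prior network and decoding in a feed-forward pass. The linear maps below are hypothetical placeholders for the paper's deep networks.

```python
# Feed-forward prediction in a CVAE-style model: sample z ~ p(z | x),
# decode p(y | x, z), and average over a few samples.
import numpy as np

rng = np.random.default_rng(0)

def predict(x, W_prior, W_dec, n_samples=10):
    """Fast inference: average decoder outputs over z ~ p(z | x)."""
    mu_z = W_prior @ x                                  # prior network p(z | x)
    zs = mu_z + rng.standard_normal((n_samples, mu_z.size))
    ys = [W_dec @ np.concatenate([x, z]) for z in zs]   # decoder p(y | x, z)
    return np.mean(ys, axis=0)

x = np.ones(4)
print(predict(x, W_prior=np.ones((2, 4)) * 0.1, W_dec=np.ones((3, 6)) * 0.05))
```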

Stochastic Backpropagation and Approximate Inference in Deep Generative Models

We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and learning.

Adam: A Method for Stochastic Optimization

This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
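
A minimal NumPy sketch of the update rule the summary describes, using the paper's default hyperparameters: bias-corrected exponential moving averages of the gradient and its square scale each coordinate's step.

```python
# One Adam step: adaptive estimates of the first and second moments of
# the gradient, with bias correction for the zero-initialized averages.
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad      # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad**2   # second-moment (uncentered variance)
    m_hat = m / (1 - b1**t)           # bias correction, t = step count
    v_hat = v / (1 - b2**t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# One step on f(theta) = theta^2, whose gradient is 2 * theta.
theta = np.array([1.0])
m = v = np.zeros_like(theta)
theta, m, v = adam_step(theta, 2 * theta, m, v, t=1)
print(theta)
```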