Learning Stochastic Recurrent Networks
@article{Bayer2014LearningSR, title={Learning Stochastic Recurrent Networks}, author={Justin Bayer and Christian Osendorfer}, journal={ArXiv}, year={2014}, volume={abs/1411.7610} }
Leveraging advances in variational inference, we propose to enhance recurrent neural networks with latent variables, resulting in Stochastic Recurrent Networks (STORNs). The model i) can be trained with stochastic gradient methods, ii) allows structured and multi-modal conditionals at each time step, iii) features a reliable estimator of the marginal likelihood and iv) is a generalisation of deterministic recurrent neural networks. We evaluate the method on four polyphonic musical data sets and…
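The four properties listed above suggest a concrete recipe: a recurrent network whose step-wise transition additionally receives a latent variable sampled via the reparameterization trick, trained with an SGVB-style objective. The following is a minimal, hypothetical PyTorch sketch of such a STORN-style model; the class name, layer sizes, standard-normal prior, and Bernoulli output (loosely matching binary piano-roll data) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class STORNSketch(nn.Module):
    """Hypothetical STORN-style model: an RNN augmented with per-step latent variables."""

    def __init__(self, x_dim=88, z_dim=16, h_dim=128):
        super().__init__()
        # Recognition RNN: reads x_{1:T} and proposes q(z_t | x)
        self.enc_rnn = nn.GRU(x_dim, h_dim, batch_first=True)
        self.enc_mu = nn.Linear(h_dim, z_dim)
        self.enc_logvar = nn.Linear(h_dim, z_dim)
        # Generative RNN: consumes the previous observation and the sampled z_t
        self.dec_rnn = nn.GRU(x_dim + z_dim, h_dim, batch_first=True)
        self.dec_out = nn.Linear(h_dim, x_dim)

    def forward(self, x):
        # x: (batch, time, x_dim), e.g. binary piano-roll frames as floats
        h_enc, _ = self.enc_rnn(x)
        mu, logvar = self.enc_mu(h_enc), self.enc_logvar(h_enc)
        # Reparameterization trick: z = mu + sigma * eps keeps gradients flowing
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        # Predict x_t from x_{<t} and z_t (inputs shifted by one step)
        x_prev = torch.cat([torch.zeros_like(x[:, :1]), x[:, :-1]], dim=1)
        h_dec, _ = self.dec_rnn(torch.cat([x_prev, z], dim=-1))
        logits = self.dec_out(h_dec)
        # SGVB objective: reconstruction term plus KL to a standard-normal prior
        rec = F.binary_cross_entropy_with_logits(logits, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return (rec + kl) / x.size(0)  # negative ELBO per sequence

# Usage: loss = STORNSketch()(batch); loss.backward() inside an ordinary SGD loop.
```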
242 Citations
Sequential Neural Models with Stochastic Layers
- Computer Science, NIPS
- 2016
Stochastic recurrent neural networks are introduced which glue a deterministic recurrent neural network and a state space model together to form a stochastic and sequential neural generative model.
Efficient Structured Inference for Stochastic Recurrent Neural Networks
- Computer Science
- 2017
A structured inference algorithm is introduced to efficiently learn a class of models that combine recurrent neural networks with state space models, including variants where the emission and transition distributions are modelled by deep neural networks.
Deep Latent Variable Models for Sequential Data
- Computer Science
- 2018
Stochastic recurrent neural networks are introduced which glue a deterministic recurrent neural network and a state space model together to form a stochastic and sequential neural generative model.
Structured Inference Networks for Nonlinear State Space Models
- Computer Science, AAAI
- 2017
A unified algorithm is introduced to efficiently learn a broad class of linear and non-linear state space models, including variants where the emission and transition distributions are modeled by deep neural networks.
A Recurrent Latent Variable Model for Sequential Data
- Computer Science, NIPS
- 2015
It is argued that through the use of high-level latent random variables, the variational RNN (VRNN) can model the kind of variability observed in highly structured sequential data such as natural speech.
Variational Recurrent Auto-Encoders
- Computer Science, ICLR
- 2015
The Variational Recurrent Auto-Encoder (VRAE), a model that combines the strengths of RNNs and SGVB, is proposed; it can be used for efficient, large-scale unsupervised learning on time series data, mapping the time series to a latent vector representation.
Stochastic Recurrent Neural Network for Multistep Time Series Forecasting
- Computer Science, ICONIP
- 2021
This model preserves the architectural workings of a recurrent neural network for which all relevant information is encapsulated in its hidden states, and this flexibility allows the model to be easily integrated into any deep architecture for sequential modelling.
Time Series Forecasting Based on Variational Recurrent Model
- Computer Science
- 2017
The proposed variational recurrent model has both deterministic hidden states and stochastic latent variables, whereas previous RNN methods consider only deterministic states, and it achieves better performance on real-world data.
Stochastic Sequential Neural Networks with Structured Inference
- Computer Science, ArXiv
- 2017
This work proposes a structured and stochastic sequential neural network, which models both the long-term dependencies via recurrent neural networks and the uncertainty in the segmentation and labels via discrete random variables, and presents a bi-directional inference network.
Recurrent Neural Networks with Stochastic Layers for Acoustic Novelty Detection
- Computer Science, ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
- 2019
This model is robust, fully unsupervised, and end-to-end; it requires minimal preprocessing, feature engineering, or hyperparameter tuning, and it outperforms state-of-the-art acoustic novelty detectors.
References
Showing 1-10 of 33 references
Training Neural Networks with Implicit Variance
- Computer Science, ICONIP
- 2013
The method is evaluated on a synthetic and an inverse robot dynamics task, yielding superior performance to plain neural networks, Gaussian processes and LWPR in terms of likelihood.
Stochastic Back-propagation and Variational Inference in Deep Latent Gaussian Models
- Computer Science, ArXiv
- 2014
We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and…
Stochastic Backpropagation and Approximate Inference in Deep Generative Models
- Computer Science, ICML
- 2014
We marry ideas from deep neural networks and approximate Bayesian inference to derive a generalised class of deep, directed generative models, endowed with a new algorithm for scalable inference and…
Learning Recurrent Neural Networks with Hessian-Free Optimization
- Computer Science, ICML
- 2011
This work solves the long-standing problem of how to effectively train recurrent neural networks on complex and difficult sequence modeling problems that may contain long-term data dependencies, and it offers a new interpretation of Schraudolph's generalized Gauss-Newton matrix, which is used within Martens' HF approach.
On Fast Dropout and its Applicability to Recurrent Networks
- Computer Science, ICLR
- 2014
This paper analyzes fast dropout, a recent regularization method for generalized linear models and neural networks, from a back-propagation-inspired perspective and shows that it implements a quadratic form of an adaptive, per-parameter regularizer, which rewards large weights in the light of underfitting, penalizes them for overconfident predictions, and vanishes at minima of an unregularized training loss.
Auto-Encoding Variational Bayes
- Computer Science, ICLR
- 2014
A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
The Recurrent Temporal Restricted Boltzmann Machine
- Computer Science, NIPS
- 2008
The Recurrent TRBM is introduced, which is a very slight modification of the TRBM for which exact inference is very easy and exact gradient learning is almost tractable.
Advances in optimizing recurrent networks
- Computer Science, 2013 IEEE International Conference on Acoustics, Speech and Signal Processing
- 2013
Experiments reported here evaluate the use of clipping gradients, spanning longer time ranges with leaky integration, advanced momentum techniques, using more powerful output probability models, and encouraging sparser gradients to help symmetry breaking and credit assignment.
How to Construct Deep Recurrent Neural Networks
- Computer Science, ICLR
- 2014
Two novel architectures of a deep RNN are proposed which are orthogonal to an earlier attempt of stacking multiple recurrent layers to build a deep RNN, and an alternative interpretation is provided using a novel framework based on neural operators.
High-dimensional sequence transduction
- Computer Science, 2013 IEEE International Conference on Acoustics, Speech and Signal Processing
- 2013
A probabilistic model based on a recurrent neural network that is able to learn realistic output distributions given the input is introduced and an efficient algorithm to search for the global mode of that distribution is devised.