Corpus ID: 168169891

GRU-ODE-Bayes: Continuous modeling of sporadically-observed time series

@inproceedings{Brouwer2019GRUODEBayesCM,
  title={GRU-ODE-Bayes: Continuous modeling of sporadically-observed time series},
  author={Edward De Brouwer and Jaak Simm and Adam Arany and Yves Moreau},
  booktitle={NeurIPS},
  year={2019}
}
Modeling real-world multidimensional time series can be particularly challenging when these are sporadically observed (i.e., sampling is irregular both in time and across dimensions), such as in the case of clinical patient data. [...] Key Method: We bring these two ideas together in our GRU-ODE-Bayes method.
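The abstract only names the two ingredients, so here is a minimal sketch of the general shape of the idea: a continuous GRU-style flow between observation times plus a discrete update whenever some dimensions are measured. This is my own simplified rendering in PyTorch, not the authors' released code; the class names, the explicit Euler step, and the mask handling are illustrative assumptions.

# Sketch of the GRU-ODE idea: the GRU update h <- z*h + (1-z)*g becomes a
# continuous flow dh/dt = (1-z)*(g-h), integrated here with one Euler step.
import torch
import torch.nn as nn

class GRUODECell(nn.Module):
    """Evolves the hidden state between observation times (no input)."""
    def __init__(self, hidden_size):
        super().__init__()
        self.lin_z = nn.Linear(hidden_size, hidden_size)
        self.lin_r = nn.Linear(hidden_size, hidden_size)
        self.lin_g = nn.Linear(hidden_size, hidden_size)

    def forward(self, h, dt):
        z = torch.sigmoid(self.lin_z(h))
        r = torch.sigmoid(self.lin_r(h))
        g = torch.tanh(self.lin_g(r * h))
        dh = (1 - z) * (g - h)      # continuous-time GRU vector field
        return h + dt * dh          # explicit Euler step of size dt

class ObservationUpdate(nn.Module):
    """Discrete jump applied whenever some dimensions of x are measured."""
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.gru = nn.GRUCell(2 * input_size, hidden_size)

    def forward(self, h, x, mask):
        # mask flags which dimensions were observed; unobserved entries are
        # zeroed so only actual measurements update the state.
        return self.gru(torch.cat([x * mask, mask], dim=-1), h)

In the actual method the continuous part is handled by a proper ODE solver and the discrete update is motivated by a Bayesian filtering view; the Euler step above is only to keep the sketch short.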
Neural Jump Ordinary Differential Equation
TLDR
The Neural Jump ODE (NJ-ODE) is introduced, providing a data-driven approach to learn, continuously in time, the conditional expectation of a stochastic process, together with a novel training framework that allows theoretical convergence guarantees to be proved for the first time.
Neural Stochastic Differential Equations with Bayesian Jumps for Marked Temporal Point Process
Many real-world systems evolve according to continuous dynamics and get interrupted by stochastic events, i.e., systems that both flow (often described by a differential equation) and jump. [...]
Dynamic Gaussian Mixture based Deep Generative Model For Robust Forecasting on Sparse Multivariate Time Series
TLDR
A novel generative model is proposed that tracks the transition of latent clusters, instead of isolated feature representations, to achieve robust modeling; it is characterized by a newly designed dynamic Gaussian mixture distribution, which captures the dynamics of clustering structures.
Latent ODEs for Irregularly-Sampled Time Series
TLDR
This work generalizes RNNs to have continuous-time hidden dynamics defined by ordinary differential equations (ODEs), a family of models called ODE-RNNs, which outperform their RNN-based counterparts on irregularly-sampled data.
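As a rough illustration of the ODE-RNN recipe summarized above, the following hedged sketch (my own simplified PyTorch rendering; the fixed-step Euler loop stands in for the adaptive solver used in practice, and all names are mine) evolves the hidden state with a learned ODE between observation times and applies a standard GRU update at each observation.

# Illustrative ODE-RNN loop: flow between observations, jump at observations.
import torch
import torch.nn as nn

class ODERNN(nn.Module):
    def __init__(self, input_size, hidden_size, n_euler_steps=5):
        super().__init__()
        self.ode_func = nn.Sequential(
            nn.Linear(hidden_size, hidden_size), nn.Tanh(),
            nn.Linear(hidden_size, hidden_size),
        )
        self.cell = nn.GRUCell(input_size, hidden_size)
        self.n_euler_steps = n_euler_steps
        self.hidden_size = hidden_size

    def forward(self, times, values):
        # times: (T,) increasing observation times; values: (T, input_size)
        h = torch.zeros(1, self.hidden_size)
        t_prev = times[0]
        outputs = []
        for t, x in zip(times, values):
            dt = (t - t_prev) / self.n_euler_steps
            for _ in range(self.n_euler_steps):   # crude Euler integration in
                h = h + dt * self.ode_func(h)      # place of an adaptive solver
            h = self.cell(x.unsqueeze(0), h)       # discrete update at the observation
            outputs.append(h)
            t_prev = t
        return torch.stack(outputs)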
NRTSI: Non-Recurrent Time Series Imputation for Irregularly-sampled Data
TLDR
This work views the imputation task from the perspective of permutation-equivariant modeling of sets and proposes a novel imputation model, NRTSI, without any recurrent modules; it achieves state-of-the-art performance across a wide range of commonly used time series imputation benchmarks.
Explainable Tensorized Neural Ordinary Differential Equations for Arbitrary-step Time Series Prediction
TLDR
Extensive experiments show that ETN-ODE can lead to accurate predictions at arbitrary time points while attaining the best performance against the baseline methods in standard multi-step time series prediction.
Transformation of ReLU-based recurrent neural networks from discrete-time to continuous-time
TLDR
This work proves three theorems on the mathematical equivalence between the discrete- and continuous-time formulations under a variety of conditions, and illustrates how to use the mathematical results on different machine learning and nonlinear dynamical systems examples.
Neural ODE Processes
TLDR
By maintaining an adaptive data-dependent distribution over the underlying ODE, this model can successfully capture the dynamics of low-dimensional systems from just a few data points and scale up to challenging high-dimensional time series with unknown latent dynamics, such as rotating MNIST digits.
Neural Controlled Differential Equations for Online Prediction Tasks
TLDR
This work identifies several theoretical conditions that interpolation schemes for Neural CDEs should satisfy, such as boundedness and uniqueness, and uses these to motivate the introduction of new schemes that address them, offering in particular measurability (for online prediction) and smoothness (for speed).
Multi-Time Attention Networks for Irregularly Sampled Time Series
TLDR
This work is motivated by the analysis of physiological time series data in electronic health records, which are sparse, irregularly sampled, and multivariate, and proposes a new deep learning framework called Multi-Time Attention Networks.

References

Showing 1-10 of 51 references
Recurrent Neural Networks for Multivariate Time Series with Missing Values
TLDR
Novel deep learning models are developed based on the Gated Recurrent Unit (GRU), a state-of-the-art recurrent neural network; the models take two representations of missing patterns, i.e., masking and time interval, and effectively incorporate them into a deep model architecture so that it not only captures the long-term temporal dependencies in time series, but also utilizes the missing patterns to achieve better prediction results.
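The "masking and time interval" representations lend themselves to a short sketch. The following is an illustrative, non-authoritative PyTorch rendering of the trainable input-decay idea; the module name and the mean-imputation fallback reflect my reading of the summary, not the reference implementation.

# Trainable input decay: observed values pass through, missing ones decay
# from the last observation toward the empirical mean as the gap grows.
import torch
import torch.nn as nn

class InputDecay(nn.Module):
    def __init__(self, input_size):
        super().__init__()
        self.lin_gamma = nn.Linear(input_size, input_size)

    def forward(self, x, mask, delta, x_last, x_mean):
        # delta: time elapsed since each variable was last observed.
        gamma = torch.exp(-torch.relu(self.lin_gamma(delta)))
        x_imputed = gamma * x_last + (1 - gamma) * x_mean
        return mask * x + (1 - mask) * x_imputed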
BRITS: Bidirectional Recurrent Imputation for Time Series
TLDR
BRITS is a novel method, based on recurrent neural networks, for missing value imputation in time series data; it directly learns the missing values in a bidirectional recurrent dynamical system, without any specific assumption.
Sparse Multi-Output Gaussian Processes for Medical Time Series Prediction
TLDR
This work proposes MedGP, a statistical framework that incorporates 24 clinical and lab covariates and supports a rich reference data set from which relationships between observed covariates may be inferred and exploited for high-quality inference of patient state over time.
Latent ODEs for Irregularly-Sampled Time Series
TLDR
This work generalizes RNNs to have continuous-time hidden dynamics defined by ordinary differential equations (ODEs), a family of models called ODE-RNNs, which outperform their RNN-based counterparts on irregularly-sampled data.
Recurrent Marked Temporal Point Processes: Embedding Event History to Vector
TLDR
The Recurrent Marked Temporal Point Process is proposed to simultaneously model the event timings and the markers, using a recurrent neural network to automatically learn a representation of influences from the event history; an efficient stochastic gradient algorithm is developed for learning the model parameters.
The Neural Hawkes Process: A Neurally Self-Modulating Multivariate Point Process
TLDR
This generative model allows past events to influence the future in complex and realistic ways by conditioning future event intensities on the hidden state of a recurrent neural network that has consumed the stream of past events.
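The key step, conditioning future event intensities on an RNN hidden state, can be sketched as a positive readout from that state. The snippet below is a hedged illustration (layer and parameter names are mine), using a scaled softplus to keep intensities positive; it is not the paper's reference code.

# Per-event-type intensities read out from the hidden state at query time t.
import torch
import torch.nn as nn
import torch.nn.functional as F

class IntensityReadout(nn.Module):
    def __init__(self, hidden_size, n_event_types):
        super().__init__()
        self.lin = nn.Linear(hidden_size, n_event_types)
        self.scale = nn.Parameter(torch.ones(n_event_types))

    def forward(self, h_t):
        # h_t: hidden state evaluated at the query time t.
        # Returns lambda_k(t) > 0 for each event type k.
        s = torch.clamp(self.scale, min=1e-3)
        return s * F.softplus(self.lin(h_t) / s)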
Deep Ensemble Tensor Factorization for Longitudinal Patient Trajectories Classification
TLDR
The performance of the architecture is demonstrated on an intensive-care case study of in-hospital mortality prediction, where the combination of generative and ensemble strategies achieves an AUC of over 0.85 and outperforms the SAPS-II mortality score and GRU baselines.
Directly Modeling Missing Data in Sequences with RNNs: Improved Classification of Clinical Time Series
TLDR
This work shows the remarkable ability of RNNs to make effective use of binary indicators to directly model missing data, improving AUC and F1 significantly; it also evaluates LSTMs, MLPs, and linear models trained on missingness patterns only.
Scalable Joint Models for Reliable Uncertainty-Aware Event Prediction
TLDR
A flexible and scalable joint model is proposed, based upon sparse multiple-output Gaussian processes, that significantly outperforms state-of-the-art techniques in event prediction and can explain highly challenging structure, including non-Gaussian noise, while scaling to large data.
Deep Kalman Filters
TLDR
A unified algorithm is introduced to efficiently learn a broad spectrum of Kalman filters; the work also investigates the efficacy of temporal generative models for counterfactual inference and introduces the "Healing MNIST" dataset, where long-term structure, noise, and actions are applied to sequences of digits.