PIETS: Parallelised Irregularity Encoders for Forecasting with Heterogeneous Time-Series

@article{Abushaqra2021PIETSPI,
  title={PIETS: Parallelised Irregularity Encoders for Forecasting with Heterogeneous Time-Series},
  author={Futoon M. Abushaqra and Hao Xue and Yongli Ren and Flora D. Salim},
  journal={2021 IEEE International Conference on Data Mining (ICDM)},
  year={2021},
  pages={976-981}
}
Heterogeneity and irregularity of multi-source data sets present a significant challenge to time-series analysis. In the literature, the fusion of multi-source time-series has been achieved either by using ensemble learning models, which ignore temporal patterns and correlations among features, or by defining a fixed-size window to select specific parts of the data sets. On the other hand, many studies have shown major improvements in handling the irregularity of time-series, yet none of these…

Citations

CrossPyramid: Neural Ordinary Differential Equations Architecture for Partially-observed Time-series

CrossPyramid, a novel ODE-based model, is introduced to enhance the generalizability of sequence representations; it does not rely only on the hidden state from the last observed value, but also considers ODE latent representations learned from other samples.

Beyond Just Vision: A Review on Self-Supervised Representation Learning on Multimodal and Temporal Data

This work aims to provide the first comprehensive review of multimodal self-supervised learning methods for temporal data, and introduces a generic pipeline by defining the key components of a self-supervised representation learning (SSRL) framework.

References

Showing 1-10 of 37 references

Multi-Time Attention Networks for Irregularly Sampled Time Series

This work is motivated by the analysis of physiological time-series data in electronic health records, which are sparse, irregularly sampled, and multivariate, and proposes a new deep learning framework called Multi-Time Attention Networks (mTAN).
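
The attention mechanism at the heart of this framework attends from a set of reference time points to the irregularly observed measurements using continuous-time embeddings. A minimal sketch of that step, assuming a simple sinusoidal time embedding and plain softmax attention rather than the authors' learned mTAN embedding, is:

import numpy as np

def time_embedding(t, d=8):
    """Map scalar times to d-dim sinusoidal features (one simple choice of continuous time embedding)."""
    freqs = np.exp(np.linspace(0, 3, d // 2))                 # assumed frequency schedule
    ang = t[:, None] * freqs[None, :]
    return np.concatenate([np.sin(ang), np.cos(ang)], axis=-1)  # (len(t), d)

def attend_to_irregular(obs_t, obs_x, obs_mask, ref_t):
    """Produce a value at every reference time by attending over the observed (time, value) pairs.
    obs_t: (n,) observation times, obs_x: (n,) values, obs_mask: (n,) 1 where observed, ref_t: (m,) grid."""
    q = time_embedding(ref_t)                                  # (m, d) queries at reference times
    k = time_embedding(obs_t)                                  # (n, d) keys at observed times
    scores = q @ k.T / np.sqrt(q.shape[1])                     # (m, n) similarity in time-embedding space
    scores = np.where(obs_mask[None, :] > 0, scores, -1e9)     # ignore missing entries
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)                       # softmax over observations
    return w @ obs_x                                           # (m,) attention-interpolated values

# toy irregular series re-expressed on a regular grid of 20 reference points
obs_t = np.array([0.1, 0.4, 0.45, 1.3, 2.0])
obs_x = np.array([1.0, 1.2, 1.1, 0.4, 0.9])
ref = attend_to_irregular(obs_t, obs_x, np.ones_like(obs_t), np.linspace(0, 2, 20))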

Recurrent Neural Networks for Multivariate Time Series with Missing Values

Novel deep learning models are developed based on the Gated Recurrent Unit, a state-of-the-art recurrent neural network. The models take two representations of missing patterns, i.e., masking and time interval, and effectively incorporate them into a deep architecture so that they not only capture long-term temporal dependencies in time series but also exploit the missing patterns to achieve better prediction results.
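
The two missing-pattern representations mentioned here, a binary observation mask and the time interval since the last observation, can be computed directly from the raw series. A small sketch of that preprocessing, together with the exponential decay toward the feature mean used by GRU-D-style models (the fixed decay rate gamma below is an assumption; the paper learns it), is:

import numpy as np

def mask_and_delta(x, timestamps):
    """x: (T, D) array with NaN for missing values; timestamps: (T,) observation times.
    Returns a binary mask m and the per-feature time since the last observation (delta)."""
    m = (~np.isnan(x)).astype(float)                    # 1 where a value was observed
    delta = np.zeros_like(x, dtype=float)
    for t in range(1, x.shape[0]):
        gap = timestamps[t] - timestamps[t - 1]
        # if the previous step was missing, accumulate the gap; otherwise restart from it
        delta[t] = gap + (1.0 - m[t - 1]) * delta[t - 1]
    return m, delta

def decayed_input(x, m, delta, x_mean, gamma=0.1):
    """Fill missing entries with the last observation decayed toward the empirical mean
    (gamma is an assumed fixed decay rate here; GRU-D learns it)."""
    x_filled = np.where(m > 0, x, 0.0)
    last = np.zeros(x.shape[1])
    out = np.empty_like(x_filled)
    for t in range(x.shape[0]):
        w = np.exp(-gamma * delta[t])                   # decay weight in (0, 1]
        imput = w * last + (1.0 - w) * x_mean           # decay the last value toward the mean
        out[t] = np.where(m[t] > 0, x_filled[t], imput)
        last = np.where(m[t] > 0, x_filled[t], last)    # remember the last observed value
    return out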

Time-series forecasting with deep learning: a survey

This article surveys common encoder and decoder designs used in both one-step-ahead and multi-horizon time-series forecasting, describing how each model incorporates temporal information into its predictions.

BRITS: Bidirectional Recurrent Imputation for Time Series

BRITS is a novel method based on recurrent neural networks for missing value imputation in time series data that directly learns the missing values in a bidirectional recurrent dynamical system, without any specific assumption.
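
A simplified structural sketch of the bidirectional idea, assuming a plain GRU cell in each direction and a mean-squared consistency term (the published BRITS model additionally uses feature regression and temporal decay), might look like:

import torch
import torch.nn as nn

class BidirectionalImputer(nn.Module):
    """Illustrative sketch of bidirectional recurrent imputation (not the published BRITS code):
    a forward RNN and a backward RNN each regress the series value at every step, missing entries
    are replaced by these estimates, and a consistency loss ties the two directions together.
    Expects x: (batch, T, features) with missing entries set to 0 and m: a matching binary mask."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.fwd = nn.GRUCell(n_features * 2, hidden)   # input = values concatenated with mask
        self.bwd = nn.GRUCell(n_features * 2, hidden)
        self.out_f = nn.Linear(hidden, n_features)
        self.out_b = nn.Linear(hidden, n_features)

    def run(self, cell, head, x, m, reverse=False):
        T = x.shape[1]
        h = x.new_zeros(x.shape[0], cell.hidden_size)
        est = []
        steps = range(T - 1, -1, -1) if reverse else range(T)
        for t in steps:
            x_hat = head(h)                                       # regression estimate for step t
            x_t = m[:, t] * x[:, t] + (1 - m[:, t]) * x_hat       # fill missing entries before recursing
            h = cell(torch.cat([x_t, m[:, t]], dim=-1), h)
            est.append(x_hat)
        if reverse:
            est.reverse()
        return torch.stack(est, dim=1)                            # (batch, T, n_features)

    def forward(self, x, m):
        f = self.run(self.fwd, self.out_f, x, m)
        b = self.run(self.bwd, self.out_b, x, m, reverse=True)
        imputed = m * x + (1 - m) * 0.5 * (f + b)                 # average the two directions where data is missing
        consistency = ((f - b) ** 2).mean()                       # encourages the directions to agree
        return imputed, consistency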

Set Functions for Time Series

This paper proposes a novel approach for classifying irregularly sampled time series with unaligned measurements, focusing on high scalability and data efficiency; it builds on recent advances in differentiable set function learning and is highly parallelizable with a small memory footprint.
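
Viewed as a set function, each observation becomes a (time, value, modality) tuple that is embedded independently and then aggregated with a permutation-invariant operation. The sketch below uses a deep-sets style mean aggregation for brevity, whereas the paper itself uses a learned attention aggregation; all layer sizes here are assumptions.

import torch
import torch.nn as nn

class SetEncoder(nn.Module):
    """Deep-sets style encoder for an unaligned, irregularly sampled series, in the spirit of the
    set-function view summarised above (a simplified sketch, not the SeFT implementation)."""
    def __init__(self, n_modalities, d=64, n_classes=2):
        super().__init__()
        # each observation is the triple (time, value, modality); the modality id is embedded
        self.modality_emb = nn.Embedding(n_modalities, d)
        self.phi = nn.Sequential(nn.Linear(d + 2, d), nn.ReLU(), nn.Linear(d, d))
        self.rho = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, n_classes))

    def forward(self, times, values, modalities):
        """times, values: (N,) floats; modalities: (N,) long ints. The whole series is one set of N observations."""
        obs = torch.cat([times.unsqueeze(-1), values.unsqueeze(-1),
                         self.modality_emb(modalities)], dim=-1)   # (N, d + 2)
        encoded = self.phi(obs)                                    # embed each observation independently
        pooled = encoded.mean(dim=0)                               # permutation-invariant aggregation
        return self.rho(pooled)                                    # class logits

# usage on a toy series of 5 observations coming from 3 sensors
enc = SetEncoder(n_modalities=3)
logits = enc(torch.rand(5), torch.rand(5), torch.randint(0, 3, (5,)))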

A scalable end-to-end Gaussian process adapter for irregularly sampled time series classification

This work proposes an uncertainty-aware classification framework based on a special computational layer known as the Gaussian process adapter that can connect irregularly sampled time series data to any black-box classifier learnable using gradient descent.
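
The adapter's first half is a Gaussian process interpolation that re-expresses an irregular series on a fixed grid before it reaches the classifier. A minimal numpy sketch of that interpolation, assuming an RBF kernel with hand-set hyperparameters (the paper learns them end to end together with the downstream classifier and propagates the posterior uncertainty), is:

import numpy as np

def gp_interpolate(obs_t, obs_y, grid_t, lengthscale=0.5, noise=1e-2):
    """Posterior mean of a zero-mean GP with an RBF kernel, evaluated on a regular grid."""
    def rbf(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / lengthscale ** 2)
    K = rbf(obs_t, obs_t) + noise * np.eye(len(obs_t))   # covariance among observed points
    K_star = rbf(grid_t, obs_t)                          # covariance between grid and observed points
    return K_star @ np.linalg.solve(K, obs_y)            # posterior mean on the grid

# an irregular series re-expressed on 50 evenly spaced points, ready for any fixed-input classifier
obs_t = np.array([0.0, 0.3, 0.35, 1.1, 1.8])
obs_y = np.array([0.2, 0.5, 0.55, -0.1, 0.3])
regular = gp_interpolate(obs_t, obs_y, np.linspace(0, 2, 50))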

Multi-resolution Networks For Flexible Irregular Time Series Modeling (Multi-FIT)

A unified model named Multi-resolution Flexible Irregular Time series Network (Multi-FIT) is proposed, which improves upon state-of-the-art models on three predictive tasks, including forecasting patient survival.

GRU-ODE-Bayes: Continuous modeling of sporadically-observed time series

Empirical evaluation shows that the proposed GRU-ODE-Bayes method outperforms the state of the art on both synthetic and real-world data, with applications in healthcare and climate forecasting, and that the continuity prior is well suited to settings with few samples.
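
A rough sketch of the underlying continuous-update idea, assuming fixed-step Euler integration between observations and a plain GRU update when a value arrives (the paper uses a proper ODE solver and a Bayesian observation update), is:

import torch
import torch.nn as nn

class GRUODESketch(nn.Module):
    """Simplified sketch of continuous hidden-state modelling for sporadic observations:
    the hidden state evolves between observations via dh/dt = f(h) and is updated
    discretely whenever a new value arrives. Layer sizes and step size are assumptions."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.dynamics = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh())  # dh/dt = f(h)
        self.jump = nn.GRUCell(n_features, hidden)                           # discrete update at observations

    def forward(self, times, values, h=None, dt=0.05):
        h = torch.zeros(1, self.jump.hidden_size) if h is None else h
        t_prev = 0.0
        for t, x in zip(times.tolist(), values):
            span = t - t_prev
            n_steps = max(1, int(span / dt))
            for _ in range(n_steps):                      # Euler steps from t_prev to t
                h = h + (span / n_steps) * self.dynamics(h)
            h = self.jump(x.view(1, -1), h)               # incorporate the new observation
            t_prev = t
        return h                                          # latent state after the last observation

model = GRUODESketch(n_features=1)
state = model(torch.tensor([0.1, 0.4, 1.3]), torch.tensor([[0.2], [0.5], [-0.1]]))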

MIDIA: exploring denoising autoencoders for missing data imputation

A new deep learning model called MIssing Data Imputation denoising Autoencoder (MIDIA) is developed that effectively imputes missing values in a given dataset by exploiting non-linear correlations between missing and observed values.
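
A generic denoising-autoencoder imputer in this spirit masks a random subset of the observed entries during training and learns to reconstruct them; the sketch below illustrates that idea and is not the MIDIA architecture, with all layer sizes assumed.

import torch
import torch.nn as nn

class DenoisingImputer(nn.Module):
    """Illustrative denoising-autoencoder imputer: reconstruct the full feature vector
    from a corrupted (artificially masked) version, so the network learns non-linear
    correlations between missing and observed features."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features * 2, hidden), nn.ReLU(),  # input = zero-filled values plus mask
            nn.Linear(hidden, n_features))

    def forward(self, x, mask):
        return self.net(torch.cat([x * mask, mask], dim=-1))

def training_step(model, x, mask, opt, drop_prob=0.2):
    """One step: hide a random subset of the observed entries and ask the model to recover them."""
    corrupt = mask * (torch.rand_like(mask) > drop_prob).float()
    recon = model(x, corrupt)
    loss = (((recon - x) ** 2) * mask).sum() / mask.sum()  # score only where ground truth exists
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# at inference, missing entries are taken from the reconstruction
model = DenoisingImputer(n_features=10)
x, mask = torch.randn(32, 10), (torch.rand(32, 10) > 0.3).float()
imputed = mask * x + (1 - mask) * model(x, mask)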

Directly Modeling Missing Data in Sequences with RNNs: Improved Classification of Clinical Time Series

This work demonstrates the remarkable ability of RNNs to make effective use of binary indicators to directly model missing data, significantly improving AUC and F1; it also evaluates LSTMs, MLPs, and linear models trained on missingness patterns alone.
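
The indicator approach is straightforward to sketch: missing entries are zero-filled and a binary "was this observed?" mask is concatenated to the input at every step, so the recurrent model can condition on the missingness pattern itself. A minimal version, with arbitrary layer sizes, is:

import torch
import torch.nn as nn

class IndicatorLSTM(nn.Module):
    """Sketch of the indicator approach summarised above: the binary observation mask is
    concatenated to the zero-filled input at every time step before the LSTM classifier."""
    def __init__(self, n_features, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features * 2, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x, mask):
        # x: (batch, T, n_features) with missing entries set to 0; mask: same shape, 1 where observed
        _, (h, _) = self.lstm(torch.cat([x, mask], dim=-1))
        return self.head(h[-1])                            # classify from the final hidden state

model = IndicatorLSTM(n_features=12)
x = torch.randn(8, 50, 12)
mask = (torch.rand(8, 50, 12) > 0.4).float()
logits = model(x * mask, mask)                             # zero-fill by multiplying with the mask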