Recurrent Neural Networks for Time Series Forecasting: Current Status and Future Directions

@article{Hewamalage2019RecurrentNN,
  title={Recurrent Neural Networks for Time Series Forecasting: Current Status and Future Directions},
  author={Hansika Hewamalage and C. Bergmeir and Kasun Bandara},
  journal={ArXiv},
  year={2019},
  volume={abs/1909.00590}
}
LoMEF: A Framework to Produce Local Explanations for Global Model Time Series Forecasts
TLDR
This work proposes a novel local, model-agnostic interpretability approach to explain forecasts from global forecasting models (GFMs), and evaluates the explanations both qualitatively and quantitatively along aspects such as accuracy, fidelity, stability and comprehensibility.
Understanding the Business of Mining Equipment
TLDR
This thesis helps Sandvik gain more business insight, which in turn can improve aftermarket performance, and provides an overview of machine learning models that may be applied to the problem of revenue and sales forecasting.
Global Models for Time Series Forecasting: A Simulation Study
Ensembles of Localised Models for Time Series Forecasting
InterpretTime: a new approach for the systematic evaluation of neural-network interpretability in time series classification
TLDR
A novel approach to evaluate the performance of interpretability methods for time series classification is presented, and a new strategy to assess the similarity between domain experts and machine data interpretation is proposed, providing a systematic interpretability evaluation framework.
The derived demand for advertising expenses and implications on sustainability: a comparative study using deep learning and traditional machine learning methods
TLDR
Long Short-Term Memory (LSTM) is found to be superior to the other models, providing highly accurate results for demand forecasting based on advertising expenses.
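As a purely illustrative picture of what such an LSTM demand forecaster can look like, here is a minimal PyTorch sketch; the layer sizes, the single ad-spend input feature, and the one-step-ahead head are assumptions, not the paper's architecture.

```python
# Minimal sketch of an LSTM regressor for demand forecasting from
# advertising spend (illustrative sizes, not the paper's model).
import torch
import torch.nn as nn

class DemandLSTM(nn.Module):
    def __init__(self, n_features: int = 1, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # one-step-ahead demand

    def forward(self, x):                  # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # predict from last hidden state

model = DemandLSTM()
x = torch.randn(8, 24, 1)                  # 8 series, 24 past ad-spend values
y_hat = model(x)                           # shape (8, 1): next-period demand
```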
MixSeq: Connecting Macroscopic Time Series Forecasting with Microscopic Time Series Data
TLDR
Inspired by the power of Seq2seq models and their variants for modeling time series data, an end-to-end mixture model is proposed to cluster microscopic time series, where all components come from a family of Seq2seq models parameterized differently.
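A toy sketch of that mixture idea, as read from the abstract (not the authors' code): K small sequence models compete to explain each microscopic series, and soft responsibilities over components act as cluster assignments. The GRU components, the MSE-as-likelihood surrogate, and all sizes are illustrative assumptions.

```python
# Toy mixture-of-sequence-models clustering sketch (assumed reading of
# the abstract): each component is a tiny next-step forecaster.
import torch
import torch.nn as nn

K, H = 3, 16                                   # components, hidden size

class TinySeq(nn.Module):
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(1, H, batch_first=True)
        self.out = nn.Linear(H, 1)
    def forward(self, x):                      # x: (batch, time, 1)
        h, _ = self.rnn(x[:, :-1])
        return self.out(h)                     # predict x[:, 1:]

components = nn.ModuleList([TinySeq() for _ in range(K)])
x = torch.randn(32, 20, 1)                     # 32 microscopic series
# per-component MSE acts as a negative log-likelihood surrogate
nll = torch.stack([((m(x) - x[:, 1:]) ** 2).mean(dim=(1, 2))
                   for m in components], dim=1)             # (32, K)
resp = torch.softmax(-nll, dim=1)              # soft cluster assignments
loss = (resp.detach() * nll).sum(dim=1).mean() # EM-style training signal
```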
Factor-Based Framework for Multivariate and Multi-step-ahead Forecasting of Large Scale Time Series
TLDR
An extension to the DFML framework, a hybrid forecasting technique inspired by the Dynamic Factor Model (DFM) approach, is proposed; it improves on the DFM approach by implementing and assessing both linear and non-linear factor estimation techniques, as well as model-driven and data-driven factor forecasting techniques.
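The general compress-forecast-decompress recipe behind such factor-based frameworks can be sketched as below; the PCA factors and the per-factor AR(1) forecaster are placeholder choices for illustration, not the paper's estimators.

```python
# Factor-based multivariate forecasting sketch: compress many series
# into a few factors, forecast each factor, map back to series space.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
Y = rng.standard_normal((200, 50)).cumsum(axis=0)   # 50 series, 200 steps

pca = PCA(n_components=5)
F = pca.fit_transform(Y)                            # (200, 5) factor series

h = 10                                              # forecast horizon
F_hat = np.empty((h, F.shape[1]))
for j in range(F.shape[1]):
    # least-squares AR(1) fit: f_t ~ phi * f_{t-1}
    phi = (F[:-1, j] @ F[1:, j]) / (F[:-1, j] @ F[:-1, j])
    f = F[-1, j]
    for t in range(h):
        f = phi * f
        F_hat[t, j] = f

Y_hat = pca.inverse_transform(F_hat)                # (10, 50) forecasts
```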
Talent Demand Forecasting with Attentive Neural Sequential Model
TLDR
This paper proposes a data-driven neural sequential approach, namely the Talent Demand Attention Network (TDAN), for forecasting fine-grained talent demand in the recruitment market; it augments the univariate time series of talent demand at multiple granularity levels to extract intrinsic attributes of both companies and job positions.
...

References

SHOWING 1-10 OF 108 REFERENCES
Finding Structure in Time
TLDR
A proposal along these lines, first described by Jordan (1986), is developed; it uses recurrent links to provide networks with a dynamic memory, and suggests a method for representing lexical categories and the type/token distinction.
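The core mechanism, a hidden state fed back through recurrent links so the network carries a dynamic memory of the sequence, fits in a few lines; the dimensions and random weights below are illustrative.

```python
# Minimal Elman-style recurrent update from "Finding Structure in Time":
# the hidden state (context units) feeds back at every step.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hidden = 4, 8
W_xh = rng.standard_normal((n_hidden, n_in)) * 0.1
W_hh = rng.standard_normal((n_hidden, n_hidden)) * 0.1
b = np.zeros(n_hidden)

h = np.zeros(n_hidden)                        # context units start empty
for x in rng.standard_normal((10, n_in)):     # a 10-step input sequence
    h = np.tanh(W_xh @ x + W_hh @ h + b)      # recurrent link = memory
```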
A Multi-Horizon Quantile Recurrent Forecaster
We propose a framework for general probabilistic multi-step time series regression. Specifically, we exploit the expressiveness and temporal nature of Recurrent Neural Networks, the nonparametric …
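Forecasters in this family are typically trained on the pinball (quantile) loss, which penalizes under-forecasts by q and over-forecasts by 1 − q; a minimal numpy version follows, with made-up horizon values.

```python
# Pinball (quantile) loss used to train quantile forecasters.
import numpy as np

def pinball_loss(y, y_hat, q):
    """y, y_hat: arrays of shape (horizons,); q in (0, 1)."""
    diff = y - y_hat
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

y = np.array([10.0, 12.0, 11.0])        # actuals over 3 horizons
y_hat = np.array([9.0, 13.0, 11.5])     # a 0.9-quantile forecast
loss = pinball_loss(y, y_hat, q=0.9)    # under-forecasts cost 9x more
```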
Training Deep Networks without Learning Rates Through Coin Betting
TLDR
This paper proposes a new stochastic gradient descent procedure for deep networks that requires no learning-rate setting: it reduces the optimization process to a game of betting on a coin and derives a learning-rate-free optimal algorithm.
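The coin-betting reduction at the heart of this line of work can be sketched in one dimension: treat the negative gradient as a coin outcome, bet a Krichevsky-Trofimov fraction of the current wealth, and use the bet as the next iterate. This is a simplified sketch of the idea, not the paper's full COCOB-Backprop algorithm, and the toy objective is an assumption.

```python
# Simplified 1-D coin-betting sketch: no learning rate anywhere.
import numpy as np

def grad(w):                 # toy objective f(w) = |w - 2|, grads in [-1, 1]
    return np.sign(w - 2.0)

wealth, sum_c, w = 1.0, 0.0, 0.0
iterates = []
for t in range(1, 2001):
    iterates.append(w)
    c = -grad(w)             # coin outcome: minus the (sub)gradient
    wealth += c * w          # settle the previous bet
    sum_c += c
    w = (sum_c / (t + 1)) * wealth   # KT bet becomes the next iterate

print(np.mean(iterates))     # averaged iterate approaches the minimizer w* = 2
```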
Neural Machine Translation by Jointly Learning to Align and Translate
TLDR
It is conjectured that the use of a fixed-length vector is a bottleneck in improving the performance of the basic encoder-decoder architecture, and it is proposed to extend it by allowing the model to automatically (soft-)search for parts of the source sentence that are relevant to predicting a target word, without having to explicitly form these parts as a hard segment.
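The proposed additive attention can be written down compactly: score every encoder state against the previous decoder state, softmax the scores into alignment weights, and take the weighted sum as the context vector. A numpy sketch with illustrative dimensions and random weights:

```python
# Additive ("soft-search") attention score from Bahdanau et al.
import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 8                               # source length, state size
H = rng.standard_normal((T, d))           # encoder hidden states h_1..h_T
s = rng.standard_normal(d)                # previous decoder state s_{i-1}

W, U = rng.standard_normal((d, d)), rng.standard_normal((d, d))
v = rng.standard_normal(d)

e = np.tanh(H @ U.T + s @ W.T) @ v        # e_ij = v^T tanh(W s_{i-1} + U h_j)
alpha = np.exp(e - e.max()); alpha /= alpha.sum()   # softmax alignment weights
context = alpha @ H                       # weighted sum replaces the fixed code
```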
Data preprocessing and augmentation for multiple short time series forecasting with recurrent neural networks
  • 36th International Symposium on Forecasting
  • 2016
Forecasting: Principles and Practice, 2nd Edition. OTexts
  • 2018
Time Series Analysis: Forecasting and Control
  • 1994
ES-RNN (slaweksmyl)
  • 2018
Forecasting with Exponential Smoothing: The State Space Approach
I. Introduction: Basic concepts; Getting started. II. Essentials: Linear innovations state space models; Non-linear and heteroscedastic innovations state space models; Estimation of innovations …
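The simplest member of this innovations state space family (local level with additive errors, i.e. simple exponential smoothing) makes the "innovations" structure concrete: y_t = l_{t-1} + e_t and l_t = l_{t-1} + alpha * e_t. The smoothing parameter below is an assumed value, not an estimate.

```python
# Local-level innovations state space filter (simple exponential smoothing).
import numpy as np

def ses_innovations(y, alpha=0.3, l0=None):
    l = y[0] if l0 is None else l0
    fitted = []
    for obs in y:
        fitted.append(l)           # one-step-ahead forecast is the level
        e = obs - l                # innovation (one-step forecast error)
        l = l + alpha * e          # state update
    return np.array(fitted), l     # fitted values and final level

y = np.array([3.0, 3.4, 3.1, 3.8, 4.0])
fitted, level = ses_innovations(y)  # all h-step-ahead forecasts equal `level`
```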
kaggle-web-traffic. Accessed: 2018-11-19
  • URL https://github.com/Arturus/kaggle-web-traffic/blob/master/how_it_works.md
  • 2017
...