Corpus ID: 237571900

Neural forecasting at scale

@article{Chatigny2021NeuralFA,
  title={Neural forecasting at scale},
  author={Philippe Chatigny and Boris N. Oreshkin and Jean-Marc Patenaude and Shengrui Wang},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.09705}
}
We study the problem of efficiently scaling ensemble-based deep neural networks for time series (TS) forecasting on a large set of time series. Current state-of-the-art deep ensemble models have high memory and computational requirements, hampering their use in forecasting millions of TS in practical scenarios. We propose N-BEATS(P), a global multivariate variant of the N-BEATS model designed to allow simultaneous training of multiple univariate TS forecasting models. Our model addresses the…
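A minimal sketch of the global training idea the abstract describes: one shared network fitted on windows pooled from many univariate series, rather than one model per series. The window sizes, network shape, loss, and toy data below are illustrative assumptions, not the actual N-BEATS(P) design.

import torch
import torch.nn as nn

LOOKBACK, HORIZON = 24, 6   # illustrative window sizes

# A single shared set of weights serves every series: each batch
# mixes input windows drawn from different univariate series.
model = nn.Sequential(
    nn.Linear(LOOKBACK, 128), nn.ReLU(),
    nn.Linear(128, HORIZON),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def training_step(windows, targets):
    # windows: (batch, LOOKBACK); targets: (batch, HORIZON).
    # Rows may come from any series in the collection.
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(windows), targets)
    loss.backward()
    opt.step()
    return loss.item()

# Toy batch standing in for windows sampled across many series.
print(training_step(torch.randn(64, LOOKBACK), torch.randn(64, HORIZON)))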

References

Showing 1-10 of 84 references.
N-BEATS: Neural basis expansion analysis for interpretable time series forecasting
The proposed deep neural architecture based on backward and forward residual links and a very deep stack of fully-connected layers has a number of desirable properties, being interpretable, applicable without modification to a wide array of target domains, and fast to train.
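A minimal sketch of the doubly residual idea described above: each block consumes the running residual of the input window (backward links) and contributes an additive partial forecast (forward links). Widths, depth, and the generic linear heads are assumptions; the paper's interpretable variants constrain the heads to fixed basis expansions.

import torch
import torch.nn as nn

class NBeatsBlock(nn.Module):
    # One block: fully-connected stack -> (backcast, forecast) heads.
    def __init__(self, lookback, horizon, width=128):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(lookback, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
        )
        self.backcast = nn.Linear(width, lookback)
        self.forecast = nn.Linear(width, horizon)

    def forward(self, x):
        h = self.fc(x)
        return self.backcast(h), self.forecast(h)

class NBeatsStack(nn.Module):
    def __init__(self, lookback, horizon, n_blocks=3):
        super().__init__()
        self.blocks = nn.ModuleList(
            NBeatsBlock(lookback, horizon) for _ in range(n_blocks))

    def forward(self, x):
        forecast = 0.0
        for block in self.blocks:
            back, fore = block(x)
            x = x - back                 # backward residual link
            forecast = forecast + fore   # forward residual link
        return forecast

print(NBeatsStack(24, 6)(torch.randn(8, 24)).shape)  # torch.Size([8, 6])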
Deep Factors for Forecasting
A hybrid model that incorporates the benefits of both classical and deep neural networks is proposed, which is data-driven and scalable via a latent, global, deep component, and handles uncertainty through a local classical model.
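A rough sketch of that hybrid structure: a shared ("global") RNN emits latent factors, each series combines them with its own loadings, and a per-series ("local") classical noise model supplies the uncertainty. All sizes, and the choice of plain Gaussian white noise for the local part, are assumptions.

import torch
import torch.nn as nn

class DeepFactorsSketch(nn.Module):
    def __init__(self, n_series, n_factors=4, feat_dim=3, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.to_factors = nn.Linear(hidden, n_factors)
        # Per-series loadings on the shared factors (global part).
        self.weights = nn.Parameter(torch.randn(n_series, n_factors))
        # Per-series noise scale (stand-in for the local model).
        self.log_sigma = nn.Parameter(torch.zeros(n_series))

    def forward(self, time_feats):
        h, _ = self.rnn(time_feats)        # (1, T, hidden)
        factors = self.to_factors(h)[0]    # (T, n_factors)
        mean = self.weights @ factors.T    # (n_series, T)
        sigma = self.log_sigma.exp()[:, None].expand_as(mean)
        return torch.distributions.Normal(mean, sigma)

dist = DeepFactorsSketch(n_series=5)(torch.randn(1, 10, 3))
print(dist.mean.shape)  # torch.Size([5, 10])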
DeepAR: Probabilistic Forecasting with Autoregressive Recurrent Networks
DeepAR is proposed, a methodology for producing accurate probabilistic forecasts, based on training an autoregressive recurrent network model on a large number of related time series, with accuracy improvements of around 15% compared to state-of-the-art methods.
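The autoregressive mechanism is easy to illustrate: at prediction time each sampled value is fed back as the next input, so repeated rollouts yield Monte Carlo sample paths. The sizes and the Gaussian head below are assumptions; the paper uses likelihoods matched to the data (e.g. negative binomial for counts).

import torch
import torch.nn as nn

class DeepARSketch(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(1, hidden, batch_first=True)
        self.mu = nn.Linear(hidden, 1)
        self.sigma = nn.Linear(hidden, 1)

    def forward(self, x, state=None):
        h, state = self.rnn(x, state)
        return self.mu(h), nn.functional.softplus(self.sigma(h)), state

    @torch.no_grad()
    def sample(self, context, horizon):
        # Warm up on the context, then feed each sampled value
        # back in as the next input (ancestral sampling).
        _, _, state = self.forward(context[:, :-1, :])
        x, out = context[:, -1:, :], []
        for _ in range(horizon):
            mu, sigma, state = self.forward(x, state)
            x = torch.distributions.Normal(mu, sigma).sample()
            out.append(x)
        return torch.cat(out, dim=1)

paths = DeepARSketch().sample(torch.randn(4, 24, 1), horizon=6)
print(paths.shape)  # torch.Size([4, 6, 1])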
Deep State Space Models for Time Series Forecasting
A novel approach to probabilistic time series forecasting that combines state space models with deep learning by parametrizing a per-time-series linear state space model with a jointly-learned recurrent neural network, which compares favorably to the state-of-the-art.
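A sketch of the parametrization: a jointly learned RNN maps covariates to the time-varying noise parameters of a linear-Gaussian state space model, and a Kalman filter supplies an exact likelihood. The 1-D local-level structure below is a simplifying assumption that keeps the filter scalar.

import torch
import torch.nn as nn

class DeepSSMSketch(nn.Module):
    def __init__(self, feat_dim=3, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # log state / obs noise variances

    def log_likelihood(self, feats, y):
        h, _ = self.rnn(feats)
        log_q, log_r = self.head(h).unbind(-1)
        q, r = log_q.exp(), log_r.exp()   # (batch, T) noise variances
        m = torch.zeros_like(y[:, 0])     # filtered state mean
        p = torch.ones_like(y[:, 0])      # filtered state variance
        ll = 0.0
        for t in range(y.shape[1]):       # scalar Kalman filter
            p_pred = p + q[:, t]
            s = p_pred + r[:, t]          # innovation variance
            ll = ll + torch.distributions.Normal(m, s.sqrt()).log_prob(y[:, t])
            k = p_pred / s                # Kalman gain
            m = m + k * (y[:, t] - m)
            p = (1 - k) * p_pred
        return ll.mean()

print(DeepSSMSketch().log_likelihood(torch.randn(4, 10, 3), torch.randn(4, 10)))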
Shape and Time Distortion Loss for Training Deep Time Series Forecasting Models
DILATE (DIstortion Loss including shApe and TimE), a new objective function for training deep neural networks that aims at accurately predicting sudden changes, is introduced, and explicitly incorporates two terms supporting precise shape and temporal change detection.
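The two-term structure can be sketched with a soft-DTW-style recursion: a differentiable dynamic program over a pairwise cost mixing a shape term (pointwise squared error) with a temporal term penalising misaligned indices. Folding the temporal penalty into the pairwise cost is a simplification; DILATE proper derives its temporal distortion index from the soft alignment path.

import torch

def soft_min(a, b, c, gamma):
    # Differentiable min via log-sum-exp.
    vals = torch.stack([a, b, c])
    return -gamma * torch.logsumexp(-vals / gamma, dim=0)

def dilate_like_loss(y_hat, y, alpha=0.5, gamma=0.01):
    # y_hat, y: (T,) single forecast/target pair.
    T = y.shape[0]
    idx = torch.arange(T, dtype=y.dtype)
    shape_cost = (y_hat[:, None] - y[None, :]) ** 2
    time_cost = ((idx[:, None] - idx[None, :]) / T) ** 2
    cost = alpha * shape_cost + (1 - alpha) * time_cost
    # Soft-DTW dynamic program over the mixed cost.
    r = torch.full((T + 1, T + 1), float("inf"), dtype=y.dtype)
    r[0, 0] = 0.0
    for i in range(1, T + 1):
        for j in range(1, T + 1):
            r[i, j] = cost[i - 1, j - 1] + soft_min(
                r[i - 1, j], r[i, j - 1], r[i - 1, j - 1], gamma)
    return r[T, T]

y_hat = torch.randn(12, requires_grad=True)
loss = dilate_like_loss(y_hat, torch.randn(12))
loss.backward()
print(loss.item())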
Adversarial Sparse Transformer for Time Series Forecasting
Adversarial Sparse Transformer (AST) is proposed, a new time series forecasting model based on Generative Adversarial Networks (GANs), which adopts a Sparse Transformer as the generator to learn a sparse attention map for time series forecasting, and uses a discriminator to improve the prediction performance at the sequence level.
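The sequence-level adversarial setup can be sketched with a stand-in generator (a GRU here, replacing the paper's Sparse Transformer) and a discriminator that scores whole forecast horizons. The hinge-free softplus losses, L1 regression term, and sizes are all assumptions; AST itself trains the generator with a quantile loss.

import torch
import torch.nn as nn

LOOKBACK, HORIZON = 24, 6

class Generator(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.GRU(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, HORIZON)
    def forward(self, x):
        h, _ = self.rnn(x)
        return self.head(h[:, -1])           # (batch, HORIZON)

gen = Generator()
disc = nn.Sequential(                        # judges whole forecast sequences
    nn.Linear(HORIZON, 64), nn.ReLU(), nn.Linear(64, 1))
g_opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(disc.parameters(), lr=1e-4)

def train_step(x, y, lam=0.1):
    fake = gen(x)
    # Discriminator: push real sequences up, generated ones down.
    d_loss = (nn.functional.softplus(-disc(y)).mean()
              + nn.functional.softplus(disc(fake.detach())).mean())
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # Generator: regression loss plus sequence-level adversarial term.
    g_loss = (nn.functional.l1_loss(fake, y)
              + lam * nn.functional.softplus(-disc(fake)).mean())
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()

print(train_step(torch.randn(16, LOOKBACK, 1), torch.randn(16, HORIZON)))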
Deep Transformer Models for Time Series Forecasting: The Influenza Prevalence Case
This work developed a novel method that employs Transformer-based machine learning models to forecast time series data and shows that the forecasting results produced are favorably comparable to the state-of-the-art.
Probabilistic Demand Forecasting at Scale
A platform built on large-scale, data-centric machine learning approaches, whose particular focus is demand forecasting in retail, that enables the training and application of probabilistic demand forecasting models, and provides convenient abstractions and support functionality for forecasting problems.
Temporal Regularized Matrix Factorization for High-dimensional Time Series Prediction
This paper develops novel regularization schemes and uses scalable matrix factorization methods that are eminently suited for high-dimensional time series data that has many missing values, and makes interesting connections to graph regularization methods in the context of learning the dependencies in an autoregressive framework.
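The factorization and its temporal regularizer can be sketched as Y ≈ F X, with each latent factor series in X tied to its own lags through learned AR weights, and the reconstruction error taken only over observed entries (which is how missing values are handled). Plain joint gradient descent, a single lag, and the step sizes below are simplifications of the paper's alternating minimization.

import numpy as np

def trmf_sketch(Y, mask, k=4, lags=(1,), lam=0.1, steps=500, lr=0.01, seed=0):
    rng = np.random.default_rng(seed)
    n, T = Y.shape
    F = rng.normal(scale=0.1, size=(n, k))   # series loadings
    X = rng.normal(scale=0.1, size=(k, T))   # latent temporal factors
    W = {l: np.full(k, 0.5) for l in lags}   # AR weight per factor and lag
    L = max(lags)
    for _ in range(steps):
        R = mask * (F @ X - Y)               # error on observed entries only
        # AR residuals: x_t - sum_l w_l * x_{t-l}
        A = X[:, L:] - sum(W[l][:, None] * X[:, L - l:T - l] for l in lags)
        gX = F.T @ R + lam * X
        gX[:, L:] += lam * A                 # temporal (AR) regularizer
        for l in lags:
            gX[:, L - l:T - l] -= lam * W[l][:, None] * A
            W[l] += lr * lam * (A * X[:, L - l:T - l]).sum(axis=1)
        F -= lr * (R @ X.T + lam * F)
        X -= lr * gX
    return F, X, W

# Toy example: 20 series, 50 steps, roughly 30% missing.
Y = np.random.randn(20, 50)
mask = (np.random.rand(20, 50) > 0.3).astype(float)
F, X, W = trmf_sketch(Y, mask)
print(np.abs(mask * (F @ X - Y)).mean())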
Enhancing the Locality and Breaking the Memory Bottleneck of Transformer on Time Series Forecasting
Convolutional self-attention is proposed, producing queries and keys with causal convolution so that local context can be better incorporated into the attention mechanism; the LogSparse Transformer is also proposed, improving forecasting accuracy for time series with fine granularity and strong long-term dependencies under a constrained memory budget.
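The first idea is easy to sketch: derive queries and keys from a causal (left-padded) convolution, so each attention score compares local shapes rather than single points. Dimensions, kernel size, and single-head attention are assumptions, and the LogSparse attention pattern itself is not reproduced here.

import torch
import torch.nn as nn

class ConvSelfAttention(nn.Module):
    def __init__(self, d_model=32, kernel=3):
        super().__init__()
        self.pad = kernel - 1                # left padding => causal
        self.q_conv = nn.Conv1d(d_model, d_model, kernel)
        self.k_conv = nn.Conv1d(d_model, d_model, kernel)
        self.v_proj = nn.Linear(d_model, d_model)

    def forward(self, x):                    # x: (batch, T, d_model)
        xc = nn.functional.pad(x.transpose(1, 2), (self.pad, 0))
        q = self.q_conv(xc).transpose(1, 2)  # queries from local windows
        k = self.k_conv(xc).transpose(1, 2)  # keys from local windows
        v = self.v_proj(x)
        scores = q @ k.transpose(1, 2) / k.shape[-1] ** 0.5
        # Mask out attention to future positions.
        causal = torch.triu(torch.ones(x.shape[1], x.shape[1]), 1).bool()
        scores = scores.masked_fill(causal, float("-inf"))
        return torch.softmax(scores, dim=-1) @ v

print(ConvSelfAttention()(torch.randn(2, 16, 32)).shape)  # (2, 16, 32)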