Corpus ID: 244729652

NeuralProphet: Explainable Forecasting at Scale

Oskar Triebe, Hansika Hewamalage, Polina Pilyugina, Nikolay Pavlovich Laptev, Christoph Bergmeir, Ram Rajagopal
We introduce NeuralProphet, a successor to Facebook Prophet, which set an industry standard for explainable, scalable, and user-friendly forecasting frameworks. With the proliferation of time series data, explainable forecasting remains a challenging task for business and operational decision making. Hybrid solutions are needed to bridge the gap between interpretable classical methods and scalable deep learning models. We view Prophet as a precursor to such a solution. However, Prophet lacks… 


AR-Net: A simple Auto-Regressive Neural Network for time-series
A new framework for time-series modeling that combines the best of traditional statistical models and neural networks is presented, and it is shown that AR-Net is as interpretable as Classic-AR but also scales to long-range dependencies.
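The interpretability claim rests on AR-Net's first layer being an ordinary AR(p) regression, whose weights can equally be fit by least squares. A minimal pure-Python sketch of that Classic-AR baseline (the function name `fit_ar` and the normal-equations approach are illustrative, not from the paper):

```python
def fit_ar(series, p):
    """Fit AR(p) coefficients by ordinary least squares.

    Solves min_w sum_t (y_t - w . [y_{t-1}, ..., y_{t-p}])^2 via the
    normal equations; pure-Python Gaussian elimination, fine for small p.
    """
    # Design matrix: each row holds the p most recent lags, newest first.
    X = [series[t - p:t][::-1] for t in range(p, len(series))]
    y = series[p:]
    n = p
    # Normal equations: (X^T X) w = X^T y
    A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(n)]
         for i in range(n)]
    b = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    # Back-substitution.
    w = [0.0] * n
    for i in range(n - 1, -1, -1):
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, n))) / A[i][i]
    return w
```

On a noise-free AR(2) series the fitted weights recover the generating coefficients exactly, which is precisely the sense in which such a model is interpretable.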
Cyclical Learning Rates for Training Neural Networks
  • Leslie N. Smith · 2017 IEEE Winter Conference on Applications of Computer Vision (WACV)
A new method for setting the learning rate, named cyclical learning rates, is described, which practically eliminates the need to experimentally find the best values and schedule for the global learning rates.
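The triangular policy from that paper is simple to state: the learning rate ramps linearly between a lower and an upper bound, repeating every two "step sizes". A sketch (the default values here are illustrative, not the paper's):

```python
def triangular_clr(step, base_lr=1e-4, max_lr=1e-2, step_size=2000):
    """Triangular cyclical learning rate.

    Ramps linearly from base_lr to max_lr over `step_size` iterations,
    then back down, repeating every 2 * step_size steps.
    """
    cycle_pos = step % (2 * step_size)      # position within the current cycle
    x = abs(cycle_pos / step_size - 1.0)    # 1 at cycle edges, 0 at the peak
    return base_lr + (max_lr - base_lr) * (1.0 - x)
```

With these defaults the rate sits at `base_lr` at steps 0 and 4000, and peaks at `max_lr` at step 2000.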
Super-convergence: very fast training of neural networks using large learning rates
A phenomenon is described, where neural networks can be trained an order of magnitude faster than with standard training methods, and it is shown that super-convergence provides a greater boost in performance relative to standard training when the amount of labeled training data is limited.
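Super-convergence is typically realized with a one-cycle schedule: warm up to a large peak learning rate, then anneal back down. A sketch of one plausible shape, linear warmup plus cosine decay (the exact shape and defaults vary between implementations; these are assumptions, not the paper's values):

```python
import math

def one_cycle_lr(step, total_steps, max_lr=0.1, div=25.0, pct_warmup=0.3):
    """One-cycle schedule: linear warmup to max_lr, then cosine decay.

    The large peak rate is the ingredient behind super-convergence;
    it also acts as a regularizer, which helps when labeled data is scarce.
    """
    warmup = int(total_steps * pct_warmup)
    base_lr = max_lr / div
    if step < warmup:
        # Linear ramp from base_lr up to max_lr.
        return base_lr + (max_lr - base_lr) * step / warmup
    # Cosine decay from max_lr back down to base_lr.
    t = (step - warmup) / max(1, total_steps - warmup)  # 0 -> 1 over decay phase
    return base_lr + (max_lr - base_lr) * 0.5 * (1 + math.cos(math.pi * t))
```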
Decoupled Weight Decay Regularization
This work proposes a simple modification to recover the original formulation of weight decay regularization by decoupling the weight decay from the optimization steps taken w.r.t. the loss function, and provides empirical evidence that this modification substantially improves Adam's generalization performance.
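The decoupling is easiest to see in a scalar update where the decay term is applied to the parameter directly, after the moment-based step, instead of being folded into the gradient (which is what L2 regularization does, and what distorts Adam's moment estimates). A minimal sketch, not PyTorch's implementation:

```python
import math

def adamw_step(param, grad, state, lr=1e-3, betas=(0.9, 0.999),
               eps=1e-8, weight_decay=0.01):
    """One AdamW-style update for a single scalar parameter.

    Key point of decoupled weight decay: the decay is scaled by lr and
    applied to the parameter itself; it never enters m or v.
    """
    state["t"] += 1
    b1, b2 = betas
    state["m"] = b1 * state["m"] + (1 - b1) * grad          # first moment
    state["v"] = b2 * state["v"] + (1 - b2) * grad * grad   # second moment
    m_hat = state["m"] / (1 - b1 ** state["t"])             # bias correction
    v_hat = state["v"] / (1 - b2 ** state["t"])
    param -= lr * m_hat / (math.sqrt(v_hat) + eps)  # Adam step on raw gradient
    param -= lr * weight_decay * param              # decoupled decay, outside Adam
    return param
```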
Monash Time Series Forecasting Archive
This paper presents a comprehensive forecasting archive containing 25 publicly available time series datasets from varied domains, with different characteristics in terms of frequency, series lengths, and inclusion of missing values, for the benefit of researchers using the archive to benchmark their forecasting algorithms.
Statistical and Machine Learning forecasting methods: Concerns and ways forward
It is found that the post-sample accuracy of popular ML methods is dominated across both accuracy measures used and for all forecasting horizons examined, and that their computational requirements are considerably greater than those of statistical methods.
PyTorch: An Imperative Style, High-Performance Deep Learning Library
This paper details the principles that drove the implementation of PyTorch and how they are reflected in its architecture, and explains how the careful and pragmatic implementation of the key components of its runtime enables them to work together to achieve compelling performance.
Out-of-sample tests of forecasting accuracy: an analysis and review
The structure of out-of-sample tests is explained, guidelines for implementing these tests are provided, and the adequacy of out-of-sample tests in forecasting software is evaluated.
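The core of an out-of-sample test is rolling-origin evaluation: at each forecast origin the model sees only past data, so no evaluated point ever influences the model that predicts it. A sketch with a naive benchmark (function names are mine, for illustration):

```python
def rolling_origin_mae(series, forecast_fn, min_train=8, horizon=1):
    """Rolling-origin (out-of-sample) MAE.

    At each origin t >= min_train, forecast `horizon` steps ahead from
    series[:t] only, then average the absolute errors over all origins.
    """
    errors = []
    for t in range(min_train, len(series) - horizon + 1):
        preds = forecast_fn(series[:t], horizon)
        actual = series[t:t + horizon]
        errors.extend(abs(a - p) for a, p in zip(actual, preds))
    return sum(errors) / len(errors)

def naive_forecast(history, horizon):
    """Benchmark: repeat the last observed value."""
    return [history[-1]] * horizon
```

On a linearly increasing series, the naive method's rolling-origin MAE is exactly the per-step increment, which makes the harness easy to sanity-check.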
Structural Time Series Models
1 Trend and Cycle Decomposition: y_t = τ_t + c_t, where y_t is an n × 1 vector and τ_t and c_t represent the trend and cycle components respectively. This decomposition into components is not unique. Beveridge and
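As the excerpt notes, the split of a series into trend and cycle is not unique: any smoother induces one such decomposition. A toy example using a centered moving average, purely for illustration (structural time series models instead estimate the components jointly):

```python
def trend_cycle(y, window=5):
    """Decompose y_t = trend_t + cycle_t via a centered moving average.

    One of many valid decompositions; the window is shortened near the
    series edges so every point gets a trend estimate.
    """
    k = window // 2
    trend = []
    for t in range(len(y)):
        lo, hi = max(0, t - k), min(len(y), t + k + 1)
        trend.append(sum(y[lo:hi]) / (hi - lo))
    cycle = [yt - tt for yt, tt in zip(y, trend)]   # residual is the "cycle"
    return trend, cycle
```

By construction the two components sum back to the original series, and a constant series yields a zero cycle.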
Another look at measures of forecast accuracy
We discuss and compare measures of accuracy of univariate time series forecasts. The methods used in the M-competition and the M3-competition, and many of the measures recommended by previous authors…
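One measure this paper (Hyndman & Koehler) recommends is MASE, which scales the out-of-sample error by the in-sample MAE of the one-step naive forecast, making scores comparable across series with different units. A minimal sketch:

```python
def mase(actual, forecast, train):
    """Mean Absolute Scaled Error.

    Values below 1 mean the forecast beats the in-sample one-step naive
    method on average, regardless of the scale of the series.
    """
    mae = sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)
    naive_mae = sum(abs(train[t] - train[t - 1])
                    for t in range(1, len(train))) / (len(train) - 1)
    return mae / naive_mae
```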