Deep and Confident Prediction for Time Series at Uber

  • Lingxue Zhu, Nikolay Pavlovich Laptev
  • Published 6 September 2017
  • Computer Science
  • 2017 IEEE International Conference on Data Mining Workshops (ICDMW)
Reliable uncertainty estimation for time series prediction is critical in many fields, including physics, biology, and manufacturing. At Uber, probabilistic time series forecasting is used for robust prediction of the number of trips during special events, driver incentive allocation, and real-time anomaly detection across millions of metrics. Classical time series models are often used in conjunction with a probabilistic formulation for uncertainty estimation. However, such models are hard…


Conformal Time-series Forecasting

This work extends the inductive conformal prediction framework to the time-series forecasting setup, proposing a lightweight uncertainty estimation procedure that provides intervals with theoretical guarantees on frequentist coverage for any multi-horizon forecast predictor and any dataset.
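The split (inductive) conformal recipe behind this kind of interval can be sketched as follows; this is a generic single-step illustration, not the paper's exact multi-horizon procedure, and the synthetic data and function name are placeholders:

```python
import numpy as np

def split_conformal_interval(cal_pred, cal_true, test_pred, alpha=0.1):
    """Split conformal prediction for regression.

    cal_pred / cal_true: predictions and targets on a held-out calibration set.
    Returns (lower, upper) bands around test_pred with ~(1 - alpha) coverage,
    assuming calibration and test points are exchangeable.
    """
    scores = np.abs(cal_true - cal_pred)                   # nonconformity scores
    n = len(scores)
    # Finite-sample corrected quantile level.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, q_level, method="higher")
    return test_pred - q, test_pred + q

rng = np.random.default_rng(0)
cal_true = rng.normal(size=500)
cal_pred = cal_true + rng.normal(scale=0.3, size=500)      # imperfect forecaster
lo, hi = split_conformal_interval(cal_pred, cal_true, test_pred=np.array([0.0]))
```

The coverage guarantee is distribution-free: it relies only on exchangeability, not on the forecaster being well specified.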

Quantifying Uncertainty in Deep Spatiotemporal Forecasting

This paper describes two types of spatiotemporal forecasting problems: regular grid-based and graph-based, and analyzes UQ methods from both the Bayesian and the frequentist point of view, casting in a unified framework via statistical decision theory.

Applying SVGD to Bayesian Neural Networks for Cyclical Time-Series Prediction and Inference

A regression-based BNN model is proposed to predict spatiotemporal quantities like hourly rider demand with calibrated uncertainties, capable of producing time series predictions as well as measures of uncertainty surrounding the predictions.

Fast Memory-efficient Extreme Events Prediction in Complex Time series

This paper proposes a generic memory-efficient framework for real-time stochastic extreme events prediction in complex time series systems such as intrusion detection, Internet of Things (IoT), and social…

Addressing model uncertainty in probabilistic forecasting using Monte Carlo dropout

This work proposes addressing the model uncertainty problem using Monte Carlo dropout, a variational approach that assigns distributions to the weights of a neural network instead of simply using fixed values.
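Monte Carlo dropout amounts to keeping dropout active at prediction time and reading the spread of repeated stochastic forward passes as model uncertainty. A minimal sketch, using a toy numpy network with placeholder (random) weights rather than any of the cited models:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy one-hidden-layer regressor with fixed (here random) weights.
W1, b1 = rng.normal(size=(1, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 1)), np.zeros(1)

def forward(x, p_drop=0.5, mc_dropout=True):
    h = np.maximum(x @ W1 + b1, 0.0)          # ReLU hidden layer
    if mc_dropout:                            # keep dropout ON at test time
        mask = rng.random(h.shape) > p_drop
        h = h * mask / (1.0 - p_drop)         # inverted-dropout scaling
    return h @ W2 + b2

x = np.array([[0.5]])
samples = np.stack([forward(x) for _ in range(200)])   # T stochastic passes
mean = samples.mean(axis=0)                            # predictive mean
std = samples.std(axis=0)                              # model-uncertainty proxy
```

Each pass samples a different thinned network, so the empirical mean and standard deviation over passes approximate the posterior predictive moments.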

Uncertainty Quantification for Traffic Forecasting: A Unified Approach

This work proposes Deep Spatio-Temporal Uncertainty Quantification (DeepSTUQ), which can estimate both aleatoric and epistemic uncertainty and combines the merits of variational inference and deep ensembling by integrating the Monte Carlo dropout and Adaptive Weight Averaging re-training methods.

ForecastNet: A Time-Variant Deep Feed-Forward Neural Network Architecture for Multi-Step-Ahead Time-Series Forecasting

It is argued that time-invariance can reduce the capacity to perform multi-step-ahead forecasting, where modelling the dynamics at a range of scales and resolutions is required.

References
Time-series Extreme Event Forecasting with Neural Networks at Uber

A novel end-to-end recurrent neural network architecture is proposed that outperforms the current state-of-the-art event forecasting methods on Uber data and generalizes well to a public M3 dataset used for time-series forecasting competitions.

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning

A new theoretical framework is developed casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, which mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.

Long Short-Term Memory

A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
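The "constant error carousel" is the additive cell-state update, in which gates decide what to keep, write, and expose. A single-step sketch of the standard LSTM cell in numpy, with tiny placeholder dimensions and random weights for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, params):
    """One LSTM step: the additive update of c is the constant-error path."""
    Wf, Wi, Wo, Wg, bf, bi, bo, bg = params
    z = np.concatenate([h_prev, x])
    f = sigmoid(Wf @ z + bf)          # forget gate
    i = sigmoid(Wi @ z + bi)          # input gate
    o = sigmoid(Wo @ z + bo)          # output gate
    g = np.tanh(Wg @ z + bg)          # candidate cell update
    c = f * c_prev + i * g            # cell state ("constant error carousel")
    h = o * np.tanh(c)                # hidden state
    return h, c

rng = np.random.default_rng(2)
H, X = 4, 3                           # hidden size, input size (illustrative)
params = [rng.normal(scale=0.1, size=(H, H + X)) for _ in range(4)] \
       + [np.zeros(H) for _ in range(4)]
h, c = np.zeros(H), np.zeros(H)
for t in range(10):                   # unroll over a short random sequence
    h, c = lstm_step(rng.normal(size=X), h, c, params)
```

Because gradients flow through `c` via the near-identity `f * c_prev` term, error signals can survive over long lags instead of vanishing through repeated squashing.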

Probabilistic Backpropagation for Scalable Learning of Bayesian Neural Networks

This work presents a novel scalable method for learning Bayesian neural networks, called probabilistic backpropagation (PBP), which works by computing a forward propagation of probabilities through the network and then doing a backward computation of gradients.

Bayesian Recurrent Neural Networks

This work shows that a simple adaptation of truncated backpropagation through time can yield good quality uncertainty estimates and superior regularisation at only a small extra computational cost during training, and demonstrates how a novel kind of posterior approximation yields further improvements to the performance of Bayesian RNNs.

A Theoretically Grounded Application of Dropout in Recurrent Neural Networks

This work applies a new variational-inference-based dropout technique in LSTM and GRU models, which outperforms existing techniques and, to the best of the authors' knowledge, improves on the single-model state of the art in language modelling with the Penn Treebank.

Nonlinear Systems Identification Using Deep Dynamic Neural Networks

It is demonstrated that deep neural networks are effective model estimators from input-output data and associated characteristics of the underlying dynamical systems.

Auto-Encoding Variational Bayes

A stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case is introduced.
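The core trick that makes this stochastic objective differentiable is reparameterization: sampling noise is drawn outside the parameters so gradients flow through the posterior's mean and variance. A minimal numpy sketch of the trick and the diagonal-Gaussian KL term (an illustration of the technique, not the paper's full encoder/decoder):

```python
import numpy as np

rng = np.random.default_rng(3)

def reparameterize(mu, log_var):
    """Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).

    The randomness lives in eps, so gradients can pass through mu and
    log_var during stochastic variational inference.
    """
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Analytic KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior."""
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))

mu = np.array([0.0, 1.0])
log_var = np.array([0.0, 0.0])            # sigma = 1 in both dimensions
z = reparameterize(mu, log_var)           # differentiable sample from q
kl = kl_to_standard_normal(mu, log_var)   # regularizer term of the ELBO
```

In a full VAE, `mu` and `log_var` come from an encoder network and the KL term is added to a reconstruction loss to form the evidence lower bound.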

Uncertainty in Deep Learning

This work develops tools to obtain practical uncertainty estimates in deep learning, casting recent deep learning tools as Bayesian models without changing either the models or the optimisation, and develops the theory for such tools.