Corpus ID: 224713632

Conformal prediction interval for dynamic time-series

@inproceedings{Xu2021ConformalPI,
  title={Conformal prediction interval for dynamic time-series},
  author={Chen Xu and Yao Xie},
  booktitle={ICML},
  year={2021}
}
  • Chen Xu, Yao Xie
  • Published in ICML 2021
  • Computer Science, Mathematics
We develop a method to build distribution-free prediction intervals for time-series based on conformal inference, called EnPI, that wraps around any ensemble estimator to construct sequential prediction intervals. EnPI is closely related to the conformal prediction (CP) framework but does not require data exchangeability. Theoretically, these intervals attain finite-sample, approximately valid average coverage for broad classes of regression functions and time-series with strongly…
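The abstract only names the ingredients, so below is a minimal Python sketch of the general recipe it describes: a bootstrap ensemble whose leave-one-out residual quantiles set sequential interval widths. This is an illustration under simplifying assumptions (toy AR(1) data, absolute residuals, a simple sliding residual update), not the authors' reference implementation of EnPI.

```python
# Sketch of an ensemble-based conformal interval in the spirit of EnPI.
# NOT the authors' implementation; aggregation and update rules simplified.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy AR(1) series: y_t depends on y_{t-1} plus noise.
T = 500
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.8 * y[t - 1] + rng.normal(scale=0.5)

# Lag-1 features; first 400 points train, the rest are tested sequentially.
X, Y = y[:-1].reshape(-1, 1), y[1:]
X_train, Y_train, X_test, Y_test = X[:400], Y[:400], X[400:], Y[400:]

# 1. Fit B bootstrap models on the training window.
B, n = 25, len(X_train)
models, in_boot = [], []
for _ in range(B):
    idx = rng.integers(0, n, n)
    m = RandomForestRegressor(n_estimators=20, random_state=0)
    models.append(m.fit(X_train[idx], Y_train[idx]))
    mask = np.zeros(n, dtype=bool)
    mask[idx] = True
    in_boot.append(mask)

# 2. Leave-one-out residuals: predict each training point only with
#    models whose bootstrap sample did not contain it.
preds = np.stack([m.predict(X_train) for m in models])  # shape (B, n)
residuals = []
for i in range(n):
    out = ~np.array([mask[i] for mask in in_boot])
    if out.any():
        residuals.append(abs(Y_train[i] - preds[out, i].mean()))

# 3. Sequential intervals: center = ensemble mean, half-width = residual
#    quantile; each new residual is appended so the width adapts over time.
alpha, covered, res = 0.1, 0, list(residuals)
for x, y_true in zip(X_test, Y_test):
    center = np.mean([m.predict(x.reshape(1, -1))[0] for m in models])
    w = np.quantile(res, 1 - alpha)
    covered += (center - w <= y_true <= center + w)
    res.append(abs(y_true - center))
print(f"empirical coverage: {covered / len(Y_test):.2f} (target {1 - alpha})")
```

Because interval widths come from held-out residual quantiles rather than an exchangeability argument, this style of wrapper can be applied to dependent data, which is the point the abstract emphasizes.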
Citations

Conformal Anomaly Detection on Spatio-Temporal Observations with Missing Data
We develop a distribution-free, unsupervised anomaly detection method called ECAD, which wraps around any regression algorithm and sequentially detects anomalies. Rooted in conformal prediction, ECAD…
Distribution-Free Prediction Bands for Multivariate Functional Time Series: an Application to the Italian Gas Market
Uncertainty quantification in forecasting represents a topic of great importance in statistics, especially when dealing with complex data characterized by non-trivial dependence structure. Pushed by…
Root-finding Approaches for Computing Conformal Prediction Set
This work investigates how a root-finding approach to conformal prediction can overcome many limitations of formerly used strategies, and discusses its complexity and drawbacks.

References

Showing 1–10 of 61 references
Conformal Prediction with Neural Networks
A modification of the original CP method, called inductive conformal prediction (ICP), is used, which allows for a neural network confidence predictor without the massive computational overhead of full CP.
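As a concrete illustration of the split idea behind ICP, a single model is trained once and held-out nonconformity scores calibrate the interval width. This is a generic sketch, not the paper's procedure; the model, data, and score below are assumptions.

```python
# Split (inductive) conformal prediction: train once, calibrate on
# held-out scores. Illustrative sketch only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(600, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=600)

# Split into a proper training set and a calibration set.
X_tr, y_tr = X[:400], y[:400]
X_cal, y_cal = X[400:], y[400:]

# One fit only: this is the computational saving of ICP over full CP,
# which would refit the model for every candidate test value.
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=3000,
                     random_state=0).fit(X_tr, y_tr)

# Nonconformity scores: absolute residuals on the calibration set.
scores = np.abs(y_cal - model.predict(X_cal))

# Finite-sample-corrected quantile of the scores.
alpha = 0.1
n = len(scores)
k = int(np.ceil((n + 1) * (1 - alpha))) - 1
qhat = np.sort(scores)[min(k, n - 1)]

# Interval for a new point.
x_new = np.array([[0.5]])
pred = model.predict(x_new)[0]
print(f"90% prediction interval: [{pred - qhat:.3f}, {pred + qhat:.3f}]")
```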
No free lunch theorems for optimization
A framework is developed to explore the connection between effective optimization algorithms and the problems they are solving. A number of "no free lunch" (NFL) theorems are presented which…
Generalization error of ensemble estimators
  • N. Ueda, R. Nakano
  • Computer Science
  • Proceedings of International Conference on Neural Networks (ICNN'96)
  • 1996
An analytical result is presented for the generalization error of ensemble estimators in terms of the factors of interest (bias, variance, covariance, and noise variance), showing how the generalization error is affected by each of them.
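For context, the result this entry summarizes is commonly stated as a bias–variance–covariance decomposition; a standard form (notation assumed here, not copied from the paper) for an ensemble average $\bar{f} = \frac{1}{M}\sum_{m=1}^{M} f_m$ is

$$\mathbb{E}\big[(\bar{f}-y)^2\big] \;=\; \overline{\mathrm{bias}}^2 \;+\; \tfrac{1}{M}\,\overline{\mathrm{var}} \;+\; \big(1-\tfrac{1}{M}\big)\,\overline{\mathrm{cov}} \;+\; \sigma^2_{\mathrm{noise}},$$

so growing the ensemble shifts weight from the members' average variance onto their average pairwise covariance, which is why decorrelated ensemble members reduce error.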
The limits of distribution-free conditional predictive inference
We consider the problem of distribution-free predictive inference, with the goal of producing predictive coverage guarantees that hold conditionally rather than marginally. Existing methods such as…
Improved Rates and Asymptotic Normality for Nonparametric Neural Network Estimators
We obtain an improved approximation rate (in Sobolev norm) of $r^{-1/2-\alpha/(d+1)}$ for a large class of single hidden layer feedforward artificial neural networks (ANN) with $r$ hidden units…
Conformal Anomaly Detection on Spatio-Temporal Observations with Missing Data
We develop a distribution-free, unsupervised anomaly detection method called ECAD, which wraps around any regression algorithm and sequentially detects anomalies. Rooted in conformal prediction, ECAD…
Conformal histogram regression
This paper develops a conformal method to compute prediction intervals for nonparametric regression that can automatically adapt to skewed data. Leveraging black-box machine learning algorithms to…
Uncertainty Sets for Image Classifiers using Conformal Prediction
An algorithm is presented that modifies any classifier to output a predictive set containing the true label with a user-specified probability, such as 90%, which provides a formal finite-sample coverage guarantee for every model and dataset.
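The recipe this entry describes can be sketched generically. The sketch below uses a plain softmax-based score; the paper's exact scoring rule is not reproduced here, and the classifier and data are assumptions.

```python
# Turning any classifier into conformal prediction sets (illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1200, n_classes=3, n_informative=6,
                           random_state=0)
X_tr, y_tr = X[:800], y[:800]
X_cal, y_cal = X[800:1000], y[800:1000]
X_test, y_test = X[1000:], y[1000:]

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Nonconformity score: 1 - predicted probability of the true class.
cal_probs = clf.predict_proba(X_cal)
scores = 1.0 - cal_probs[np.arange(len(y_cal)), y_cal]

# Calibrated threshold for 90% coverage (finite-sample-corrected quantile).
alpha = 0.1
n = len(scores)
k = int(np.ceil((n + 1) * (1 - alpha))) - 1
qhat = np.sort(scores)[min(k, n - 1)]

# Prediction set: every class whose score falls below the threshold.
test_probs = clf.predict_proba(X_test)
sets = (1.0 - test_probs) <= qhat            # boolean (n_test, n_classes)
coverage = sets[np.arange(len(y_test)), y_test].mean()
print(f"coverage: {coverage:.2f}, avg set size: {sets.sum(1).mean():.2f}")
```

Note how the guarantee is model-agnostic: the classifier is treated as a black box, and only the calibration quantile carries the coverage argument.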
A Comprehensive Analysis of Deep Regression
The results reinforce the hypothesis that, in general, an adequately tuned general-purpose network can yield results close to the state of the art without resorting to more complex and ad-hoc regression models.
Adaptive, Distribution-Free Prediction Intervals for Deep Networks
A neural network is proposed that outputs three values instead of a single point estimate and optimizes a loss function motivated by the standard quantile regression loss; two prediction interval methods are provided with finite-sample coverage guarantees, solely under the assumption that the observations are independent and identically distributed.