Corpus ID: 233301258

Testing for Outliers with Conformal p-values

@inproceedings{Bates2021TestingFO,
  title={Testing for Outliers with Conformal p-values},
  author={Stephen Bates and Emmanuel J. Cand{\`e}s and Lihua Lei and Yaniv Romano and Matteo Sesia},
  year={2021}
}
This paper studies the construction of p-values for nonparametric outlier detection, taking a multiple-testing perspective. The goal is to test whether new independent samples belong to the same distribution as a reference data set or are outliers. We propose a solution based on conformal inference, a broadly applicable framework which yields p-values that are marginally valid but mutually dependent for different test points. We prove these p-values are positively dependent and enable exact… 
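
As a concrete illustration of the recipe above, here is a minimal Python sketch (my own, not the authors' code; the function names are placeholders): split-conformal p-values are computed by ranking each test point's non-conformity score against a held-out calibration set of inlier scores, and the Benjamini-Hochberg step-up rule is then applied to the resulting p-values, which is the use case the positive-dependence result supports.

```python
import numpy as np

def conformal_pvalues(cal_scores, test_scores):
    """Split-conformal p-values for outlier detection.

    cal_scores: non-conformity scores of a held-out set of inliers
        (higher = more atypical), from any one-class scoring model.
    test_scores: scores of the new test points.
    Each p-value is marginally super-uniform for an inlier test point
    that is exchangeable with the calibration data.
    """
    cal_scores = np.asarray(cal_scores)
    test_scores = np.asarray(test_scores)
    n = len(cal_scores)
    # Rank every test score against the calibration scores; the +1 terms
    # give the usual finite-sample validity guarantee.
    counts = (cal_scores[None, :] >= test_scores[:, None]).sum(axis=1)
    return (1.0 + counts) / (n + 1.0)

def benjamini_hochberg(pvals, alpha=0.1):
    """Benjamini-Hochberg step-up procedure; returns a rejection mask."""
    pvals = np.asarray(pvals)
    m = len(pvals)
    order = np.argsort(pvals)
    passed = pvals[order] <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.max(np.nonzero(passed)[0])  # largest index passing the rule
        reject[order[: k + 1]] = True
    return reject
```

Any score function fit on a separate training split (a kernel density estimate, an isolation forest, a one-class SVM) could supply `cal_scores` and `test_scores`.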

Citations

Conformal prediction beyond exchangeability
TLDR
These algorithms are provably robust, with substantially less loss of coverage when exchangeability is violated due to distribution drift or other challenging features of real data, while also achieving the same coverage guarantees as existing conformal prediction methods if the data points are in fact exchangeable.
Semi-supervised multiple testing
TLDR
This work considers a null distribution-free approach for multiple testing in the following semi-supervised setting, with a focus on false discovery rate (FDR) control and the Benjamini-Hochberg (BH) procedure, and presents theoretical results for this framework.
Conformal Prediction Sets with Limited False Positives
TLDR
This work develops a new approach to multi-label conformal prediction whose aim is to output a precise set of promising prediction candidates with a bounded number of incorrect answers, and demonstrates the effectiveness of this approach across a number of classification tasks in natural language processing, computer vision, and computational chemistry.
Sensitivity Analysis of Individual Treatment Effects: A Robust Conformal Inference Approach
TLDR
A model-free framework for sensitivity analysis of individual treatment effects (ITEs) is developed, building upon ideas from conformal inference, along with a sharpness result showing that for certain classes of prediction problems the prediction intervals cannot possibly be tightened.
Test for non-negligible adverse shifts
TLDR
This work proposes a framework to detect adverse shifts based on outlier scores, D-SOS, which is uniquely tailored to serve as a robust metric for model monitoring and data validation.
Learn then Test: Calibrating Predictive Algorithms to Achieve Risk Control
TLDR
The main insight is to reframe the risk-control problem as multiple hypothesis testing, enabling techniques and mathematical arguments different from those in the previous literature.
A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification
TLDR
This hands-on introduction is aimed at a reader interested in the practical implementation of distribution-free UQ who is not necessarily a statistician, allowing them to rigorously quantify algorithmic uncertainty with one self-contained document.
Conformal prediction for the design problem
TLDR
This work introduces a method to quantify predictive uncertainty in such settings by constructing confidence sets for predictions that account for the dependence between the training and test data.
Conformalized Frequency Estimation from Sketched Data
TLDR
To facilitate the exposition, this paper explicitly demonstrates the use of the proposed conformal inference method in combination with the famous count-min sketch algorithm and a non-linear variation thereof.
Conformal histogram regression
TLDR
A conformal method to compute prediction intervals for nonparametric regression that can automatically adapt to skewed data and have marginal coverage in finite samples, while asymptotically achieving conditional coverage and optimal length if the black-box model is consistent.
...

References

SHOWING 1-10 OF 111 REFERENCES
Combining P-Values Via Averaging
This paper proposes general methods for the problem of multiple testing of a single hypothesis, with a standard goal of combining a number of p-values without making any assumptions about their dependence structure; two of its averaging rules are sketched below.
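
Two of the averaging rules from that paper are simple enough to sketch. The snippet below is an illustrative paraphrase rather than the authors' code, and the scaling factors (2 for the arithmetic mean, e for the geometric mean) are the ones I recall the paper establishing for arbitrarily dependent p-values.

```python
import numpy as np

def combine_arithmetic(pvals):
    """Valid under arbitrary dependence: twice the arithmetic mean."""
    return min(1.0, 2.0 * float(np.mean(pvals)))

def combine_geometric(pvals):
    """Valid under arbitrary dependence: e times the geometric mean.

    Note: any zero p-value drives the geometric mean to zero.
    """
    return min(1.0, float(np.e * np.exp(np.mean(np.log(pvals)))))
```
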
Prediction and outlier detection in classification problems
  • Leying Guan, R. Tibshirani
  • Computer Science
    Journal of the Royal Statistical Society. Series B, Statistical Methodology
  • 2022
TLDR
This work considers the multi-class classification problem when the training data and the out-of-sample test data may have different distributions and proposes a method called BCOPS (balanced and conformal optimized prediction sets), which tries to optimize the out-of-sample performance and estimates the outlier detection rate of a given procedure.
A comparison of some conformal quantile regression methods
We compare two recent methods that combine conformal inference with quantile regression to produce locally adaptive and marginally valid prediction intervals under sample exchangeability (Romano et al., 2019; Kivaranovic et al., 2020).
Multivariate Outlier Detection With High-Breakdown Estimators
In this paper we develop multivariate outlier tests based on the high-breakdown Minimum Covariance Determinant estimator. The rules that we propose have good performance under the null hypothesis of…
Classification Accuracy as a Proxy for Two Sample Testing
TLDR
This work proves two results that hold for all classifiers in any dimension: if the classifier's true error remains $\epsilon$-better than chance for some $\epsilon>0$ as $d,n \to \infty$, then (a) the permutation-based test is consistent (has power approaching one), and (b) a computationally efficient test based on a Gaussian approximation of the null distribution is also consistent.
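
For the permutation-based variant, here is a minimal sketch of my own (sklearn's LogisticRegression stands in for an arbitrary classifier, and the paper's Gaussian-approximation test is not shown): under the null the pooled labels are exchangeable, so refitting on permuted labels yields the null distribution of the held-out accuracy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def classifier_two_sample_pvalue(X, Y, n_perm=200, seed=0):
    """Permutation two-sample test via held-out classifier accuracy.

    Tests H0: X and Y are drawn from the same distribution. Labels are
    exchangeable under H0, so permuting them gives the null distribution
    of the test-set accuracy.
    """
    rng = np.random.default_rng(seed)
    Z = np.vstack([X, Y])
    labels = np.concatenate([np.zeros(len(X)), np.ones(len(Y))])

    def heldout_accuracy(y):
        Z_tr, Z_te, y_tr, y_te = train_test_split(
            Z, y, test_size=0.5, random_state=seed, stratify=y)
        clf = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
        return clf.score(Z_te, y_te)  # accuracy on the held-out half

    observed = heldout_accuracy(labels)
    null = [heldout_accuracy(rng.permutation(labels)) for _ in range(n_perm)]
    # Permutation p-value with the +1 correction for finite-sample validity.
    return (1 + sum(a >= observed for a in null)) / (n_perm + 1)
```
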
Distribution-free conditional predictive bands using density estimators
TLDR
Two conformal methods based on conditional density estimators that do not depend on this type of assumption to obtain asymptotic conditional coverage are introduced: Dist-split and CD-split.
On the evaluation of unsupervised outlier detection: measures, datasets, and an empirical study
TLDR
An extensive experimental study on the performance of a representative set of standard k-nearest-neighbor-based methods for unsupervised outlier detection is conducted across a wide variety of datasets prepared for this purpose, and a characterization of the datasets themselves is provided.
A Distribution-Free Test of Covariate Shift Using Conformal Prediction
TLDR
This is the first successful attempt to use conformal prediction for testing statistical hypotheses; the proposed test can be effectively combined with existing classification algorithms to find good conformity score functions.
Classification with Valid and Adaptive Coverage
TLDR
A novel conformity score is developed, which is explicitly demonstrated to be powerful and intuitive for classification problems, but whose underlying principle is potentially far more general.
Uncertainty Sets for Image Classifiers using Conformal Prediction
TLDR
An algorithm is presented that modifies any classifier to output a predictive set containing the true label with a user-specified probability, such as 90%, which provides a formal finite-sample coverage guarantee for every model and dataset.
...