Improving Trustworthiness of AI Disease Severity Rating in Medical Imaging with Ordinal Conformal Prediction Sets

@inproceedings{Lu2022ImprovingTO,
  title={Improving Trustworthiness of AI Disease Severity Rating in Medical Imaging with Ordinal Conformal Prediction Sets},
  author={Charles Lu and Anastasios Nikolas Angelopoulos and Stuart R. Pomerantz},
  booktitle={MICCAI},
  year={2022}
}
The regulatory approval and broad clinical deployment of medical AI have been hampered by the perception that deep learning models fail in unpredictable and possibly catastrophic ways. A lack of statistically rigorous uncertainty quantification is a significant factor undermining trust in AI results. Recent developments in distribution-free uncertainty quantification present practical solutions for these issues by providing reliability guarantees for black-box models on arbitrary data… 
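
As a concrete illustration of the idea, below is a minimal Python sketch of split conformal prediction for an ordinal severity classifier. It assumes a softmax model over K ordered severity grades and hypothetical held-out arrays cal_probs / cal_labels; the greedy contiguous-interval construction follows the general split-conformal recipe and is not necessarily the paper's exact procedure.

import numpy as np

def greedy_interval(probs, stop):
    """Grow a contiguous interval of ordinal labels outward from the
    argmax, always absorbing the more probable neighbor, until the
    stopping condition holds or the interval covers all labels."""
    lo = hi = int(np.argmax(probs))
    mass = probs[lo]
    while not stop(lo, hi, mass) and (lo > 0 or hi < len(probs) - 1):
        left = probs[lo - 1] if lo > 0 else -np.inf
        right = probs[hi + 1] if hi < len(probs) - 1 else -np.inf
        if left >= right:
            lo -= 1
            mass += probs[lo]
        else:
            hi += 1
            mass += probs[hi]
    return lo, hi, mass

def calibrate_qhat(cal_probs, cal_labels, alpha=0.1):
    """Score each calibration example by the softmax mass of the smallest
    greedy interval covering its true label, then take the inflated
    (1 - alpha) empirical quantile of the scores."""
    scores = [greedy_interval(p, lambda lo, hi, m, y=y: lo <= y <= hi)[2]
              for p, y in zip(cal_probs, cal_labels)]
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, level, method="higher")

def predict_set(probs, qhat):
    """Prediction set: the smallest greedy interval with mass >= qhat."""
    lo, hi, _ = greedy_interval(probs, lambda lo, hi, m: m >= qhat)
    return list(range(lo, hi + 1))

With qhat = calibrate_qhat(cal_probs, cal_labels), predict_set(test_probs, qhat) returns a contiguous range of severity grades containing the true grade with probability at least 1 - alpha, marginally over the calibration and test data.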

Trustworthy clinical AI solutions: a unified review of uncertainty quantification in deep learning models for medical image analysis

This review provides an overview of existing methods for quantifying the uncertainty of deep learning predictions, focusing on medical image analysis, which presents specific challenges due to the high dimensionality of images, their variable quality, and the constraints of real-life clinical routine.

References

Showing 1-10 of 33 references

Fair Conformal Predictors for Applications in Medical Imaging

This paper explores how conformal prediction can complement existing deep learning approaches by expressing uncertainty in an intuitive way and offering greater transparency to clinical users, and empirically evaluates several conformal predictors on a dermatology photography dataset for skin lesion classification.

A Gentle Introduction to Conformal Prediction and Distribution-Free Uncertainty Quantification

This hands-on introduction is aimed at readers interested in the practical implementation of distribution-free uncertainty quantification who are not necessarily statisticians, enabling them to rigorously quantify algorithmic uncertainty with a single self-contained document.

Three Applications of Conformal Prediction for Rating Breast Density in Mammography

Three applications of conformal prediction to medical imaging tasks are presented; the results show the potential of distribution-free uncertainty quantification techniques to enhance trust in AI algorithms and expedite their translation to clinical use.

The need for uncertainty quantification in machine-assisted medical decision making

It is time to develop methods for systematically quantifying the uncertainty underlying deep learning processes, which would increase confidence in the practical applicability of these approaches.

Conformal Prediction in Clinical Medical Sciences

The reviewed literature shows that conformal prediction methods can provide important insight into the accuracy of individual predictions in clinical applications, but most studies have been performed in isolation, without input from practicing clinicians, without comparisons among different approaches, and without consideration of the socio-technical factors that drive clinical adoption.

Uncertainty Sets for Image Classifiers using Conformal Prediction

An algorithm is presented that modifies any classifier to output a predictive set containing the true label with a user-specified probability, such as 90%; this provides a formal finite-sample coverage guarantee for every model and dataset.
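
A minimal sketch of that recipe, assuming held-out softmax outputs (cal_probs, cal_labels are hypothetical names); this is the basic split-conformal baseline, not the regularized adaptive method developed in the cited paper.

import numpy as np

def split_conformal_classifier(cal_probs, cal_labels, alpha=0.1):
    """Calibrate on held-out (probs, label) pairs so the returned
    predictor emits sets containing the true label with probability
    at least 1 - alpha, marginally, in finite samples."""
    n = len(cal_labels)
    # Conformity score: 1 - softmax probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    qhat = np.quantile(scores, level, method="higher")

    def predict_set(probs):
        # Keep every label whose softmax probability clears the threshold.
        return np.where(probs >= 1.0 - qhat)[0].tolist()

    return predict_set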

Distribution-Free, Risk-Controlling Prediction Sets

This work shows how to generate set-valued predictions from a black-box predictor that control the expected loss on future test points at a user-specified level, and provides explicit finite-sample guarantees for any dataset by using a holdout set to calibrate the size of the prediction sets.
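
The calibration step can be sketched as a simple threshold scan. The version below uses a Hoeffding upper confidence bound and a miscoverage loss purely for illustration; the paper develops tighter bounds and handles general monotone losses, and the variable names are hypothetical.

import numpy as np

def calibrate_lambda(cal_probs, cal_labels, alpha=0.1, delta=0.1):
    """Pick the largest threshold lambda whose Hoeffding upper confidence
    bound on the miscoverage risk stays below alpha. Sets are
    {y : p_y >= lambda}, so the risk grows monotonically with lambda."""
    n = len(cal_labels)
    true_probs = cal_probs[np.arange(n), cal_labels]
    slack = np.sqrt(np.log(1.0 / delta) / (2.0 * n))  # Hoeffding term
    best = 0.0  # lambda = 0 keeps every label, so the risk is zero
    for lam in np.linspace(0.0, 1.0, 101):
        risk = float(np.mean(true_probs < lam))  # empirical miscoverage
        if risk + slack <= alpha:
            best = lam
        else:
            break  # monotone risk: larger thresholds only do worse
    return best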

Artificial Intelligence Sepsis Prediction Algorithm Learns to Say "I don't know"

COMPOSER (COnformal Multidimensional Prediction Of SEpsis Risk) is presented: a deep learning model for the early prediction of sepsis, specifically designed to reduce false alarms by detecting unfamiliar patients and situations arising from erroneous data, missingness, distributional shift, and data drift.

Private Prediction Sets

This work develops a method that takes any pre-trained predictive model and outputs differentially private prediction sets; it follows the general approach of split conformal prediction, using holdout data to calibrate the size of the prediction sets while preserving privacy through a privatized quantile subroutine.
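
One standard way to privatize the quantile step is the exponential mechanism over the gaps between sorted calibration scores, sketched below under the assumption that scores are bounded in [lo, hi]. This is illustrative only; the paper's exact subroutine and privacy accounting may differ.

import numpy as np

def private_quantile(scores, q, epsilon, lo=0.0, hi=1.0, rng=None):
    """Exponential-mechanism quantile: sample a gap between sorted
    scores with probability proportional to
    (gap width) * exp(-epsilon * |rank - q * n| / 2),
    then return a uniform draw from that gap."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.sort(np.clip(scores, lo, hi))
    edges = np.concatenate(([lo], x, [hi]))  # n + 2 edge points
    widths = np.diff(edges)                  # n + 1 candidate gaps
    ranks = np.arange(len(x) + 1)            # data points below each gap
    utility = -np.abs(ranks - q * len(x))    # sensitivity-1 rank error
    logits = np.log(np.maximum(widths, 1e-12)) + epsilon * utility / 2.0
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    i = rng.choice(len(probs), p=probs)
    return rng.uniform(edges[i], edges[i + 1])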

Nested conformal prediction and quantile out-of-bag ensemble methods