Quantifying Predictive Uncertainty in Medical Image Analysis with Deep Kernel Learning

Zhiliang Wu, Yinchong Yang, Jindong Gu, Volker Tresp
2021 IEEE 9th International Conference on Healthcare Informatics (ICHI)
Deep neural networks are increasingly being used for the analysis of medical images. However, most works neglect the uncertainty in the model's prediction. We propose an uncertainty-aware deep kernel learning model which estimates predictive uncertainty via a pipeline of a Convolutional Neural Network and a sparse Gaussian Process. Furthermore, we adapt different pre-training methods to investigate their impacts on the proposed model. We apply our approach to Bone Age… 
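The CNN-plus-sparse-GP pipeline described in the abstract can be sketched in miniature. The following NumPy toy is an assumption-laden stand-in, not the paper's implementation: a frozen random projection plays the role of the trained CNN embedding, and an exact (rather than sparse) GP is used for brevity. It shows how a predictive mean and a per-point variance fall out of the GP head:

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel between rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def feature_extractor(x, W):
    """Stand-in for a CNN embedding: one random projection + ReLU."""
    return np.maximum(x @ W, 0.0)

def gp_predict(z_train, y_train, z_test, noise=0.1):
    """Exact GP posterior mean and latent variance on extracted features."""
    K = rbf_kernel(z_train, z_train) + noise * np.eye(len(z_train))
    Ks = rbf_kernel(z_test, z_train)
    Kss = rbf_kernel(z_test, z_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks @ alpha                       # predictive mean
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - (v**2).sum(0)      # predictive variance
    return mean, var

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 4))           # frozen "CNN" weights (hypothetical)
x_train = rng.normal(size=(20, 8))    # stand-ins for image inputs
y_train = np.sin(x_train.sum(1))      # toy regression targets
x_test = rng.normal(size=(5, 8))

z_train = feature_extractor(x_train, W)
z_test = feature_extractor(x_test, W)
mean, var = gp_predict(z_train, y_train, z_test)
# var quantifies predictive uncertainty for each test point
```

In the actual model the feature extractor is trained jointly with a sparse (inducing-point) GP, which keeps inference tractable on large radiograph datasets; the toy above only illustrates where the uncertainty estimate comes from.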


Uncertainty-Aware Time-to-Event Prediction using Deep Kernel Accelerated Failure Time Models

This work proposes Deep Kernel Accelerated Failure Time models for the time-to-event prediction task, enabling uncertainty-aware predictions via a pipeline of a recurrent neural network and a sparse Gaussian Process.



Performance of a Deep-Learning Neural Network Model in Assessing Skeletal Maturity on Pediatric Hand Radiographs.

A deep-learning convolutional neural network model can estimate skeletal maturity with accuracy similar to that of an expert radiologist and to that of existing automated models.

DeepLesion: automated mining of large-scale lesion annotations and universal lesion detection with deep learning

Using DeepLesion, a universal lesion detector is trained that can find all types of lesions with one unified framework and achieves a sensitivity of 81.1% with five false positives per image.

Holistic and Comprehensive Annotation of Clinically Significant Findings on Diverse CT Images: Learning From Radiology Reports and Label Ontology

A lesion annotation network (LesaNet) based on a multilabel convolutional neural network (CNN) learns all labels holistically and can precisely annotate lesions using an ontology of 171 fine-grained labels.

Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles

This work proposes an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates.
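The ensemble idea summarized above is straightforward to illustrate. In this hedged toy (not the paper's networks), closed-form ridge regressors fitted on bootstrap resamples stand in for independently initialized neural networks, and disagreement across members serves as the uncertainty estimate:

```python
import numpy as np

def fit_ridge(X, y, lam=1e-2):
    """Closed-form ridge regression, standing in for one ensemble member."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
X_test = rng.normal(size=(4, 3))

# An ensemble of M members; the paper uses random init + data shuffling
# for NNs, bootstrap resampling is used here to diversify linear models.
M = 10
preds = []
for _ in range(M):
    idx = rng.integers(0, len(X), size=len(X))
    w = fit_ridge(X[idx], y[idx])
    preds.append(X_test @ w)
preds = np.stack(preds)

mean = preds.mean(0)         # ensemble prediction
uncertainty = preds.std(0)   # member disagreement = predictive uncertainty
```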

Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning

A new theoretical framework is developed casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, which mitigates the problem of representing uncertainty in deep learning without sacrificing computational efficiency or test accuracy.
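The practical recipe from this framework (Monte Carlo dropout) is simple: keep dropout active at test time and average several stochastic forward passes. A minimal NumPy sketch with a fixed random two-layer net (hypothetical weights, not a trained model):

```python
import numpy as np

rng = np.random.default_rng(2)
W1 = rng.normal(size=(5, 32))   # untrained toy weights
W2 = rng.normal(size=(32, 1))

def forward(x, p=0.5):
    """One stochastic pass with dropout kept ON at test time."""
    h = np.maximum(x @ W1, 0.0)
    mask = rng.random(h.shape) > p   # Bernoulli dropout mask
    h = h * mask / (1.0 - p)         # inverted-dropout scaling
    return (h @ W2).ravel()

x = rng.normal(size=(3, 5))
T = 100                              # number of MC samples
samples = np.stack([forward(x) for _ in range(T)])
mean = samples.mean(0)   # approximate predictive mean
var = samples.var(0)     # spread across passes approximates uncertainty
```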

Parametric Gaussian Process Regressors

In an extensive empirical comparison with a number of alternative methods for scalable GP regression, it is found that the resulting predictive distributions exhibit significantly better-calibrated uncertainties and higher log likelihoods, often by as much as half a nat per datapoint.

What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?

A Bayesian deep learning framework combining input-dependent aleatoric uncertainty with epistemic uncertainty is presented, which makes the loss more robust to noisy data while also giving new state-of-the-art results on segmentation and depth regression benchmarks.
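The aleatoric half of such a framework is commonly realized as a heteroscedastic regression loss in which the network predicts a per-input log-variance alongside the mean, so noisy inputs get down-weighted. A minimal NumPy sketch with toy numbers (an illustration of loss attenuation, not the paper's benchmarks):

```python
import numpy as np

def heteroscedastic_loss(y, mu, log_var):
    """Attenuated regression loss: the model predicts a mean mu and a
    log-variance per input; high predicted noise down-weights the
    squared error but is penalized by the +0.5*log_var term."""
    return np.mean(0.5 * np.exp(-log_var) * (y - mu) ** 2 + 0.5 * log_var)

y = np.array([1.0, 2.0, 10.0])          # last target is an outlier
mu = np.array([1.1, 1.9, 2.0])
confident = np.zeros(3)                 # log_var = 0 everywhere
attenuated = np.array([0.0, 0.0, 3.0])  # high predicted noise on the outlier

# Raising the predicted variance on the noisy point lowers the total loss:
assert heteroscedastic_loss(y, mu, attenuated) < heteroscedastic_loss(y, mu, confident)
```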

Understanding Individual Decisions of CNNs via Contrastive Backpropagation

Contrastive Layer-wise Relevance Propagation is proposed, which is capable of producing instance-specific, class-discriminative, pixel-wise explanations and both qualitative and quantitative evaluations show that the CLRP generates better explanations than the LRP.

Opportunities and challenges in developing deep learning models using electronic health records data: a systematic review

A systematic review of deep learning models for electronic health record (EHR) data is conducted, and various deep learning architectures for analyzing different data sources and their target applications are illustrated.

Adversarial Examples, Uncertainty, and Transfer Testing Robustness in Gaussian Process Hybrid Deep Networks

This paper shows that GP hybrid deep networks (GPDNNs; GPs on top of DNNs, trained end-to-end) inherit the nice properties of both GPs and DNNs and are much more robust to adversarial examples.