Towards safe deep learning: accurately quantifying biomarker uncertainty in neural network predictions

@article{EatonRosen2018TowardsSD,
  title={Towards safe deep learning: accurately quantifying biomarker uncertainty in neural network predictions},
  author={Zach Eaton-Rosen and Felix J. S. Bragman and Sotirios Bisdas and S{\'e}bastien Ourselin and M. Jorge Cardoso},
  journal={ArXiv},
  year={2018},
  volume={abs/1806.08640}
}
Automated medical image segmentation, specifically using deep learning, has shown outstanding performance in semantic segmentation tasks. […] When applied to a tumour volume estimation application, we demonstrate that by using such modelling of uncertainty, deep learning systems can be made to report volume estimates with well-calibrated error-bars, making them safer for clinical use. We also show that the uncertainty estimates extrapolate to unseen data, and that the confidence intervals are…
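The abstract's idea of reporting volumes with calibrated error-bars can be illustrated with a minimal sketch: sample segmentations from a stochastic model (for example via Monte Carlo dropout) and summarise the spread of the resulting volumes. The names `model`, `image`, `voxel_volume_ml` and `n_samples` below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch: estimating tumour volume with error bars by sampling
# segmentations from a stochastic (e.g. Monte Carlo dropout) model.
# `model`, `image`, `voxel_volume_ml` and `n_samples` are placeholders.
import numpy as np
import torch

def volume_with_error_bars(model, image, voxel_volume_ml=0.001, n_samples=20):
    model.train()  # keep dropout layers active at test time (MC dropout)
    volumes = []
    with torch.no_grad():
        for _ in range(n_samples):
            probs = torch.sigmoid(model(image))   # per-voxel tumour probability
            seg = (probs > 0.5).float()           # one sampled segmentation
            volumes.append(seg.sum().item() * voxel_volume_ml)
    volumes = np.asarray(volumes)
    # 95% interval from the empirical spread of the sampled volumes
    lo, hi = np.percentile(volumes, [2.5, 97.5])
    return volumes.mean(), (lo, hi)
```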
Assessing Reliability and Challenges of Uncertainty Estimations for Medical Image Segmentation
TLDR
An evaluation of common voxel-wise uncertainty measures for deep learning segmentation finds that the reliability of the uncertainty estimates is often compromised, and that auxiliary networks are a valid alternative to common uncertainty methods since they can be applied to any previously trained segmentation model.
On the Relationship Between Calibrated Predictors and Unbiased Volume Estimation
TLDR
It is concluded that having a calibrated predictor is a sufficient, but not necessary condition for obtaining an unbiased estimate of the volume, and that convex combinations of calibrated classifiers preserve volume estimation, but do not preserve calibration.
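A short worked sketch of why calibration is sufficient for an unbiased volume estimate, using the standard soft-volume estimator; the notation below is illustrative and not taken from the paper.

```latex
% Soft-volume estimator over voxels i = 1,...,N with predicted probabilities p_i
% and binary ground-truth labels y_i (illustrative notation).
\hat{V} = \sum_{i=1}^{N} p_i, \qquad V = \sum_{i=1}^{N} y_i.
% If the predictor is calibrated, \mathbb{E}[y_i \mid p_i] = p_i, hence
\mathbb{E}[\hat{V}] = \sum_{i=1}^{N} \mathbb{E}[p_i]
                    = \sum_{i=1}^{N} \mathbb{E}\big[\mathbb{E}[y_i \mid p_i]\big]
                    = \mathbb{E}[V].
% Calibration is therefore sufficient for unbiased volume estimation, but not
% necessary: per-voxel biases can also cancel in the sum without calibration.
```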
Uncertainty Quantification in Deep Learning for Safer Neuroimage Enhancement
TLDR
Methods to characterise different components of uncertainty in medical image enhancement problems and demonstrate the ideas using diffusion MRI super-resolution to highlight three key benefits of uncertainty modelling for improving the safety of DL-based image enhancement systems.
Quality control for more reliable integration of deep learning-based image segmentation into medical workflows
TLDR
This work reveals how QC methods can help to detect failed segmentation cases and therefore make automatic segmentation more reliable and suitable for clinical practice.
Quantification of Uncertainty in Brain Tumor Segmentation using Generative Network and Bayesian Active Learning
TLDR
This paper uses a generative adversarial network to handle limited labeled images and introduces supervised acquisition functions based on distance functions between ground-truth and predicted images to quantify segmentation uncertainty.
Uncertainty-Aware Training of Neural Networks for Selective Medical Image Segmentation
TLDR
A novel method is presented that considers such uncertainty in the training process to maximize the accuracy on the confident subset rather than the accuracy on the whole dataset.
Epistemic and aleatoric uncertainties reduction with rotation variation for medical image segmentation with ConvNets
TLDR
Experiments on segmentation of computed tomography images demonstrate that overconfident incorrect predictions are reduced through uncertainty reduction and that the method outperforms prediction baselines based on epistemic and aleatoric estimation.
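One common way to exploit rotation variation at inference time is test-time augmentation: predict on rotated copies of the input and average the un-rotated predictions. The sketch below assumes 90-degree in-plane rotations for simplicity and may differ from the cited paper's exact procedure; `net` and `x` are placeholders.

```python
# Hedged sketch of rotation-based test-time augmentation: average predictions
# over 90-degree in-plane rotations, which keep the voxel grid exact.
import torch

def rotation_tta(net, x, dims=(-2, -1)):
    preds = []
    with torch.no_grad():
        for k in range(4):                               # 0, 90, 180, 270 degrees
            rotated = torch.rot90(x, k, dims=dims)
            p = torch.softmax(net(rotated), dim=1)
            preds.append(torch.rot90(p, -k, dims=dims))  # rotate prediction back
    return torch.stack(preds).mean(dim=0)
```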
Bayesian Neural Networks for Uncertainty Estimation of Imaging Biomarkers
TLDR
This work proposes to propagate segmentation uncertainty to the statistical analysis to account for variations in segmentation confidence, and evaluates four Bayesian neural networks to sample from the posterior distribution and estimate the uncertainty.
...
...

References

What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?
TLDR
A Bayesian deep learning framework combining input-dependent aleatoric uncertainty with epistemic uncertainty is presented, which makes the loss more robust to noisy data and gives new state-of-the-art results on segmentation and depth regression benchmarks.
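The input-dependent (heteroscedastic) aleatoric term can be sketched, in the spirit of Kendall and Gal's regression formulation, as a loss where the network also predicts a per-output log-variance so that noisy targets are automatically down-weighted; the function and argument names below are assumptions.

```python
# Illustrative heteroscedastic regression loss: the network predicts a mean and
# a log-variance per output; high predicted variance attenuates the residual
# term but is penalised by the log-variance term.
import torch

def heteroscedastic_mse(pred_mean, pred_log_var, target):
    precision = torch.exp(-pred_log_var)  # 1 / sigma^2, predicted per output
    return torch.mean(0.5 * precision * (target - pred_mean) ** 2
                      + 0.5 * pred_log_var)
```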
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
TLDR
A new theoretical framework is developed casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, which mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
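A minimal sketch of Monte Carlo dropout at test time, in the spirit of this framework: keep dropout active, run several stochastic forward passes, and treat the spread of the predictions as a model (epistemic) uncertainty estimate. `net`, `x` and `T` are placeholders.

```python
# Minimal Monte Carlo dropout sketch: dropout stays on at test time and the
# variance across T stochastic forward passes serves as an uncertainty signal.
import torch

def mc_dropout_predict(net, x, T=30):
    net.train()  # keeps dropout active (in practice, switch only the dropout
                 # modules to train mode so batch-norm statistics stay fixed)
    with torch.no_grad():
        samples = torch.stack([torch.softmax(net(x), dim=1) for _ in range(T)])
    mean_probs = samples.mean(dim=0)   # predictive probabilities
    uncertainty = samples.var(dim=0)   # per-class variance across passes
    return mean_probs, uncertainty
```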
On the Compactness, Efficiency, and Representation of 3D Convolutional Networks: Brain Parcellation as a Pretext Task
TLDR
This work investigates efficient and flexible elements of modern convolutional networks such as dilated convolution and residual connection, and proposes a high-resolution, compact convolutional network for volumetric image segmentation.
Ensembles of Multiple Models and Architectures for Robust Brain Tumour Segmentation
TLDR
This paper explores Ensembles of Multiple Models and Architectures (EMMA) for robust performance through aggregation of predictions from a wide range of methods to reduce the influence of the meta-parameters of individual models and the risk of overfitting the configuration to a particular database.
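EMMA-style aggregation can be sketched as averaging the per-voxel class probabilities of independently trained models and taking the consensus label; `models` and `image` are placeholders, and the disagreement measure below is one simple choice rather than the paper's own.

```python
# Illustrative ensemble aggregation over heterogeneous segmentation models:
# average the per-voxel class probabilities and use disagreement as a rough
# uncertainty signal.
import torch

def ensemble_segmentation(models, image):
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(image), dim=1) for m in models])
    mean_probs = probs.mean(dim=0)              # averaged confidence map
    segmentation = mean_probs.argmax(dim=1)     # consensus label per voxel
    disagreement = probs.var(dim=0).sum(dim=1)  # per-voxel model disagreement
    return segmentation, mean_probs, disagreement
```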
NiftyNet: a deep-learning platform for medical imaging
The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)
TLDR
The set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences are reported, finding that different algorithms worked best for different sub-regions, but that no single algorithm ranked in the top for all sub-regions simultaneously.
Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features
TLDR
This set of labels and features should enable direct utilization of the TCGA/TCIA glioma collections towards repeatable, reproducible and comparative quantitative studies leading to new predictive, prognostic, and diagnostic assessments, as well as performance evaluation of computer-aided segmentation methods.
Dropout: a simple way to prevent neural networks from overfitting
TLDR
It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
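For contrast with the Monte Carlo use above, a minimal sketch of dropout in its conventional role as a regulariser, where the layer is stochastic only during training; the layer sizes below are arbitrary.

```python
# Minimal sketch of standard dropout as a regulariser: units are randomly
# zeroed in model.train() and the layer becomes the identity in model.eval(),
# unlike the test-time Monte Carlo use sketched earlier.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(256, 128),
    nn.ReLU(),
    nn.Dropout(p=0.5),   # active during training, disabled under eval()
    nn.Linear(128, 10),
)
```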