Towards Reducing Aleatoric Uncertainty for Medical Imaging Tasks

Abhishek Singh Sambyal, N. C. Krishnan, Deepti R. Bathula. 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI).
In safety-critical applications like medical diagnosis, certainty associated with a model’s prediction is just as important as its accuracy. Consequently, uncertainty estimation and reduction play a crucial role. Uncertainty in predictions can be attributed to noise or randomness in data (aleatoric) and incorrect model inferences (epistemic). While model uncertainty can be reduced with more data or bigger models, aleatoric uncertainty is more intricate. This work proposes a novel approach that… 
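The aleatoric/epistemic split described in the abstract is often computed from repeated stochastic forward passes: the mean of the per-pass predicted variances estimates aleatoric (data) uncertainty, while the variance of the per-pass predicted means estimates epistemic (model) uncertainty. A minimal numpy sketch of this decomposition (toy inputs; `decompose_uncertainty` is an illustrative helper, not the paper's code):

```python
import numpy as np

def decompose_uncertainty(means, variances):
    """Split total predictive variance into aleatoric and epistemic parts.

    means:     (T, N) per-pass predicted means (T stochastic forward passes)
    variances: (T, N) per-pass predicted variances
    """
    aleatoric = variances.mean(axis=0)  # expected data noise
    epistemic = means.var(axis=0)       # disagreement between passes
    total = aleatoric + epistemic
    return aleatoric, epistemic, total

# Toy example: 4 stochastic forward passes over 3 inputs
rng = np.random.default_rng(0)
means = rng.normal(0.0, 1.0, size=(4, 3))
variances = np.full((4, 3), 0.25)
alea, epi, tot = decompose_uncertainty(means, variances)
```

With constant per-pass variances of 0.25, the aleatoric term is 0.25 everywhere, and any spread in the per-pass means shows up only in the epistemic term.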
1 Citation


Uncertainty-Based Rejection in Machine Learning: Implications for Model Development and Interpretability
This work focuses on putting uncertainty quantification (UQ) into practice, closing the gap between UQ research and its utility in the ML pipeline, and offering insights into how UQ can improve model development and interpretability.
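Uncertainty-based rejection of the kind this paper studies is, at its simplest, thresholding a per-example uncertainty score such as predictive entropy and abstaining above the threshold. A minimal sketch (toy probabilities; the threshold value and helper names are illustrative):

```python
import numpy as np

def predictive_entropy(probs):
    """Entropy of a categorical predictive distribution, per example.

    probs: (N, C) array of class probabilities (each row sums to 1).
    """
    eps = 1e-12  # guard against log(0)
    return -(probs * np.log(probs + eps)).sum(axis=1)

def reject_uncertain(probs, threshold):
    """Boolean mask: True where the model should abstain (defer)."""
    return predictive_entropy(probs) > threshold

probs = np.array([[0.98, 0.01, 0.01],   # confident  -> keep
                  [0.34, 0.33, 0.33]])  # uncertain  -> reject
mask = reject_uncertain(probs, threshold=0.5)
```

Rejected examples are typically deferred to a human reader, which is the workflow that matters in medical imaging.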


References

Data Augmentation for Brain-Tumor Segmentation: A Review
Reviews current advances in data-augmentation techniques applied to magnetic resonance images of brain tumors, and highlights the most promising research directions for synthesizing high-quality artificial brain-tumor examples that can boost the generalization ability of deep models.
What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?
A Bayesian deep learning framework combining input-dependent aleatoric uncertainty with epistemic uncertainty is presented, which makes the loss more robust to noisy data and yields new state-of-the-art results on segmentation and depth-regression benchmarks.
Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
This work proposes an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high quality predictive uncertainty estimates.
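The deep-ensembles recipe trains several networks from different random initialisations and averages their predictive distributions; spread between members signals uncertainty. A minimal sketch of the aggregation step only (toy probabilities; the summed-variance disagreement score is one illustrative choice, not the paper's exact metric):

```python
import numpy as np

def ensemble_predict(member_probs):
    """Average the softmax outputs of independently trained members.

    member_probs: (M, N, C) — M ensemble members, N inputs, C classes.
    Returns the mean probabilities and a simple disagreement score
    (variance across members, summed over classes).
    """
    mean_probs = member_probs.mean(axis=0)
    disagreement = member_probs.var(axis=0).sum(axis=1)
    return mean_probs, disagreement

member_probs = np.array([
    [[0.9, 0.1], [0.6, 0.4]],   # member 1
    [[0.8, 0.2], [0.2, 0.8]],   # member 2
    [[0.9, 0.1], [0.7, 0.3]],   # member 3
])
mean_probs, disagreement = ensemble_predict(member_probs)
```

The second input, where members disagree sharply, gets a much higher disagreement score than the first.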
A Bayesian Neural Net to Segment Images with Uncertainty Estimates and Good Calibration
We propose a novel Bayesian decision-theoretic deep-neural-network (DNN) framework for image segmentation, enabling us to define a principled measure of uncertainty associated with label…
Bayesian SegNet: Model Uncertainty in Deep Convolutional Encoder-Decoder Architectures for Scene Understanding
A practical system that predicts pixel-wise class labels with a measure of model uncertainty, showing that modelling uncertainty improves segmentation performance by 2-3% across a number of state-of-the-art architectures such as SegNet, FCN and Dilation Network, with no additional parametrisation.
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
A new theoretical framework is developed casting dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, which mitigates the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
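In practice, the MC-dropout recipe from this paper keeps dropout active at test time and averages many stochastic forward passes; the spread of those passes approximates epistemic uncertainty. A minimal sketch with a toy linear "model" (the `forward(x, mask)` interface and all names are illustrative, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(forward, x, n_samples=50, p=0.5):
    """Monte Carlo dropout: run n_samples stochastic passes with
    dropout still on, then report the mean prediction and the
    standard deviation across passes as an uncertainty estimate."""
    preds = []
    for _ in range(n_samples):
        # inverted dropout: zero features with prob p, rescale the rest
        mask = (rng.random(x.shape) > p) / (1.0 - p)
        preds.append(forward(x, mask))
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)

# Toy linear model with dropout applied to the input features
w = np.array([1.0, -2.0, 0.5])
forward = lambda x, mask: (x * mask) @ w
mean, std = mc_dropout_predict(forward, np.array([1.0, 1.0, 1.0]))
```

Because inverted dropout is unbiased, the MC mean stays near the deterministic output (w·x = −0.5 here), while the non-zero std reflects model uncertainty.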
Weight Uncertainty in Neural Network
This work introduces a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop, and shows how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems.
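Bayes by Backprop parameterises each weight by a learnable mean μ and a parameter ρ with σ = softplus(ρ), and samples weights via the reparameterisation trick so gradients flow through μ and ρ. A minimal sketch of just the sampling step (the full algorithm also adds a KL term to the loss; names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_weights(mu, rho):
    """Draw one weight sample w = mu + sigma * epsilon.

    sigma = log(1 + exp(rho)) (softplus) keeps sigma positive while
    letting rho range over all reals; epsilon is standard Gaussian
    noise, so gradients can flow to mu and rho (reparameterisation).
    """
    sigma = np.log1p(np.exp(rho))
    epsilon = rng.standard_normal(mu.shape)
    return mu + sigma * epsilon

mu = np.zeros(4)
rho = np.full(4, -5.0)   # softplus(-5) is tiny -> samples stay near mu
w = sample_weights(mu, rho)
```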