Corpus ID: 239049526

Towards Reducing Aleatoric Uncertainty for Medical Imaging Tasks

@article{Sambyal2021TowardsRA,
  title={Towards Reducing Aleatoric Uncertainty for Medical Imaging Tasks},
  author={Abhishek Singh Sambyal and N. C. Krishnan and Deepti R. Bathula},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.11012}
}
In safety-critical applications like medical diagnosis, certainty associated with a model’s prediction is just as important as its accuracy. Consequently, uncertainty estimation and reduction play a crucial role. Uncertainty in predictions can be attributed to noise or randomness in data (aleatoric) and incorrect model inferences (epistemic). While model uncertainty can be reduced with more data or bigger models, aleatoric uncertainty is more intricate. This work proposes a novel approach that… 
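The aleatoric/epistemic split described above can be made concrete with a small sketch (not from the paper): run several stochastic forward passes of a model that predicts a mean and a variance per input, then apply the law of total variance. All names, shapes, and the toy data below are illustrative assumptions.

```python
# Illustrative sketch: decomposing predictive uncertainty into aleatoric
# and epistemic parts via the law of total variance, given T stochastic
# forward passes of a heteroscedastic regression model.
import torch

def decompose_uncertainty(means: torch.Tensor, variances: torch.Tensor):
    """means, variances: tensors of shape (T, N) from T stochastic passes.

    Returns per-sample aleatoric and epistemic uncertainty estimates.
    """
    aleatoric = variances.mean(dim=0)             # E_t[sigma_t^2]: data noise
    epistemic = means.var(dim=0, unbiased=False)  # Var_t[mu_t]: model disagreement
    return aleatoric, epistemic

# Toy usage with random numbers standing in for real model outputs.
T, N = 20, 8
means = torch.randn(T, N)
variances = torch.rand(T, N)
ale, epi = decompose_uncertainty(means, variances)
print(ale.shape, epi.shape)  # torch.Size([8]) torch.Size([8])
```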


References

Data Augmentation for Brain-Tumor Segmentation: A Review
Reviews current advances in data-augmentation techniques applied to magnetic resonance images of brain tumors and highlights the most promising research directions for synthesizing high-quality artificial brain-tumor examples that can boost the generalization abilities of deep models.
What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?
Presents a Bayesian deep learning framework that combines input-dependent aleatoric uncertainty with epistemic uncertainty, making the loss more robust to noisy data and achieving new state-of-the-art results on segmentation and depth-regression benchmarks.
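A minimal sketch of the heteroscedastic, "learned loss attenuation" regression loss this reference describes; the network is assumed to predict a mean and a log-variance per sample, and the function name and tensor shapes are illustrative rather than the authors' code.

```python
# Negative Gaussian log-likelihood with input-dependent (aleatoric) variance.
import torch

def heteroscedastic_loss(y_true, mu, log_var):
    """y_true, mu, log_var: tensors of the same shape."""
    precision = torch.exp(-log_var)
    # Residuals on noisy inputs are down-weighted by the predicted variance,
    # while the log-variance term penalises predicting large variance everywhere.
    return (0.5 * precision * (y_true - mu) ** 2 + 0.5 * log_var).mean()

y = torch.randn(4, 1, 32, 32)
mu = torch.randn(4, 1, 32, 32, requires_grad=True)
log_var = torch.zeros(4, 1, 32, 32, requires_grad=True)
loss = heteroscedastic_loss(y, mu, log_var)
loss.backward()
```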
Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles
Proposes an alternative to Bayesian NNs that is simple to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high-quality predictive uncertainty estimates.
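A rough sketch of ensemble-based uncertainty in the spirit of this reference: average the softmax outputs of several independently initialised networks and score uncertainty with the predictive entropy. The tiny MLP and random data are placeholders only.

```python
import torch
import torch.nn as nn

def make_net():
    return nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))

ensemble = [make_net() for _ in range(5)]   # normally each member is trained separately
x = torch.randn(16, 10)

with torch.no_grad():
    probs = torch.stack([net(x).softmax(dim=-1) for net in ensemble])  # (M, N, C)
mean_probs = probs.mean(dim=0)                                         # (N, C)
entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
print(entropy.shape)  # torch.Size([16]); higher entropy = less confident
```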
A Bayesian Neural Net to Segment Images with Uncertainty Estimates and Good Calibration
We propose a novel Bayesian decision theoretic deep-neural-network (DNN) framework for image segmentation, enabling us to define a principled measure of uncertainty associated with label…
Bayesian SegNet: Model Uncertainty in Deep Convolutional Encoder-Decoder Architectures for Scene Understanding
Presents a practical system that predicts pixel-wise class labels with a measure of model uncertainty, and shows that modelling uncertainty improves segmentation performance by 2–3% across several state-of-the-art architectures such as SegNet, FCN and Dilation Network, with no additional parametrisation.
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning
Develops a new theoretical framework that casts dropout training in deep neural networks (NNs) as approximate Bayesian inference in deep Gaussian processes, mitigating the problem of representing uncertainty in deep learning without sacrificing either computational complexity or test accuracy.
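A short sketch of Monte Carlo dropout at test time as this reference proposes: keep dropout stochastic during inference and treat repeated forward passes as approximate posterior samples. The architecture, shapes, and sample count are arbitrary placeholders.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(64, 1),
)

def mc_dropout_predict(model, x, n_samples=30):
    model.train()  # keeps Dropout stochastic; BatchNorm layers would need extra care
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])  # (T, N, 1)
    return samples.mean(dim=0), samples.var(dim=0)  # predictive mean, epistemic variance

x = torch.randn(8, 10)
mean, var = mc_dropout_predict(model, x)
print(mean.shape, var.shape)
```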
Weight Uncertainty in Neural Network
Introduces a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop, and shows how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems.
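A very small sketch of the Bayes-by-Backprop idea: each weight has a Gaussian variational posterior sampled with the reparameterisation trick, and the loss adds a KL term to a standard normal prior. A full Bayesian layer is omitted for brevity, and the data-loss term is a stand-in for the likelihood.

```python
import torch

mu = torch.zeros(5, requires_grad=True)            # variational means
rho = torch.full((5,), -3.0, requires_grad=True)   # sigma = softplus(rho) > 0

def sample_weights():
    sigma = torch.nn.functional.softplus(rho)
    eps = torch.randn_like(mu)
    return mu + sigma * eps, sigma                 # reparameterised weight sample

def kl_to_standard_normal(mu, sigma):
    # KL( N(mu, sigma^2) || N(0, 1) ), summed over all weights
    return (sigma.pow(2) + mu.pow(2) - 1).mul(0.5).sum() - sigma.log().sum()

w, sigma = sample_weights()
data_loss = (w.sum() - 1.0) ** 2                   # stand-in for the likelihood term
loss = data_loss + kl_to_standard_normal(mu, sigma)
loss.backward()                                    # gradients flow to mu and rho
```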
Uncertainty modelling in deep learning for safer neuroimage enhancement: Demonstration in diffusion MRI
Introduces methods to characterise different components of uncertainty, accounting for intrinsic uncertainty through a heteroscedastic noise model and for parameter uncertainty through approximate Bayesian inference, and demonstrates the ideas on diffusion MRI super-resolution.