Interpretability Aware Model Training to Improve Robustness against Out-of-Distribution Magnetic Resonance Images in Alzheimer's Disease Classification
@article{Kuijs2021InterpretabilityAM,
  title   = {Interpretability Aware Model Training to Improve Robustness against Out-of-Distribution Magnetic Resonance Images in Alzheimer's Disease Classification},
  author  = {Merel Kuijs and Catherine R. Jutzeler and Bastian Rieck and Sarah Catharina Br{\"u}ningk},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2111.08701}
}
Owing to its pristine soft-tissue contrast and high resolution, structural magnetic resonance imaging (MRI) is widely applied in neurology, making it a valuable data source for image-based machine learning (ML) and deep learning applications. The physical nature of MRI acquisition and reconstruction, however, causes variations in image intensity, resolution, and signal-to-noise ratio. Since ML models are sensitive to such variations, performance on out-of-distribution data, which is inherent to…
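The abstract above is truncated and does not spell out the training procedure, so the following is only a minimal sketch of what "interpretability-aware" training could look like: the usual cross-entropy loss is augmented with a penalty on input-gradient saliency that falls outside a brain mask. The toy network, the mask, and the weighting factor `lam` are illustrative assumptions, not the authors' published implementation.

```python
# Sketch: interpretability-aware loss = cross-entropy + penalty on saliency
# outside a brain mask. All names and shapes here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """Toy 2D classifier standing in for an MRI-slice AD/CN model."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(16, num_classes)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

def interpretability_aware_loss(model, images, labels, brain_mask, lam=0.1):
    """Cross-entropy plus a penalty on attribution that falls outside the mask."""
    images = images.clone().requires_grad_(True)
    logits = model(images)
    ce = F.cross_entropy(logits, labels)
    # Input-gradient saliency of the true-class logit (create_graph=True so the
    # penalty itself stays differentiable and can be trained through).
    class_score = logits.gather(1, labels.unsqueeze(1)).sum()
    grads, = torch.autograd.grad(class_score, images, create_graph=True)
    saliency = grads.abs()
    outside = saliency * (1.0 - brain_mask)          # attribution outside the brain
    penalty = outside.sum() / (saliency.sum() + 1e-8)
    return ce + lam * penalty

if __name__ == "__main__":
    model = SmallCNN()
    x = torch.randn(4, 1, 64, 64)                    # fake MRI slices
    y = torch.randint(0, 2, (4,))
    mask = torch.zeros(4, 1, 64, 64)
    mask[..., 16:48, 16:48] = 1.0                    # fake brain mask
    loss = interpretability_aware_loss(model, x, y, mask)
    loss.backward()
    print(float(loss))
```

The saliency term could equally be computed with Grad-CAM (see the reference below), and the penalty weight would need to be tuned against classification accuracy.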
One Citation
Out-of-Distribution (OOD) Detection Based on Deep Learning: A Review
- Computer Science · Electronics
- 2022
The latest applications of deep-learning-based OOD detection and the open problems and expectations in this field are presented, and methods are categorized according to their training data.
References
The reliability of a deep learning model in clinical out-of-distribution MRI data: a multicohort study
- Medicine · Medical Image Analysis
- 2020
Improving the Generalizability of Convolutional Neural Network-Based Segmentation on CMR Images
- Computer Science, Medicine · Frontiers in Cardiovascular Medicine
- 2020
It is demonstrated that a neural network trained on a single-site single-scanner dataset from the UK Biobank can be successfully applied to segmenting cardiac MR images across different sites and different scanners without substantial loss of accuracy.
Multi-Source Domain Adaptation via Optimal Transport for Brain Dementia Identification
- Computer Science · 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI)
- 2021
A multi-source optimal transport framework for cross-domain Alzheimer’s disease (AD) diagnosis with multi-site MRI data is proposed, with results suggesting the superiority of MSOT over several state-of-the-art methods.
MR signal intensity: staying on the bright side in MR image interpretation
- Medicine · RMD Open
- 2018
This viewpoint summarizes the different subjective choices that can be made when generating MR images and stresses the importance of communication between radiologists and rheumatologists for correct image interpretation.
A Road Map for Translational Research on Artificial Intelligence in Medical Imaging: From the 2018 National Institutes of Health/RSNA/ACR/The Academy Workshop.
- Medicine · Journal of the American College of Radiology (JACR)
- 2019
EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples
- Computer Science · AAAI
- 2018
The authors' elastic-net attacks to DNNs (EAD) feature L1-oriented adversarial examples and include the state-of-the-art L2 attack as a special case, suggesting novel insights on leveraging L1 distortion in adversarial machine learning and its security implications for DNNs.
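For reference, the elastic-net attack objective summarized above is usually written as follows (notation adapted here):

\[
\min_{x \in [0,1]^p} \; c \cdot f(x, t) \;+\; \beta \,\lVert x - x_0 \rVert_1 \;+\; \lVert x - x_0 \rVert_2^2 ,
\]

where $x_0$ is the original input, $t$ the target class, $f$ a targeted attack loss, and $c$, $\beta$ trade off attack success against $L_1$ and $L_2$ distortion; setting $\beta = 0$ recovers a purely $L_2$-based attack, which is how the L2 attack appears as a special case.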
The impact of field strength on image quality in MRI
- Medicine · Journal of Magnetic Resonance Imaging (JMRI)
- 1996
The physical principles behind the field-strength dependence of MRI image quality are reviewed, and diagnostic equivalence between the two field strengths compared is demonstrated for at least two common clinical disease categories (multiple sclerosis and internal derangement of the knee).
Interpretable Machine Learning in Healthcare
- Computer Science, Medicine · 2018 IEEE International Conference on Healthcare Informatics (ICHI)
- 2018
The landscape of recent advances addressing the challenges of model interpretability in healthcare is explored, along with how one would go about choosing the right interpretable machine learning algorithm for a given problem in healthcare.
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
- Computer Science · 2017 IEEE International Conference on Computer Vision (ICCV)
- 2017
This work proposes a technique for producing ‘visual explanations’ for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent and explainable, and shows that even non-attention-based models learn to localize discriminative regions of the input image.
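Since Grad-CAM is the kind of attribution method the main paper builds on, here is a minimal, self-contained sketch of the computation described in this reference: channel weights are the spatially averaged gradients of the class score with respect to a chosen convolutional layer, and the heat map is the ReLU of the weighted sum of that layer's feature maps. The toy model and layer choice are illustrative assumptions.

```python
# Sketch of Grad-CAM: weight feature maps by spatially averaged gradients,
# sum over channels, apply ReLU. Model and layer choice are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

def grad_cam(model, conv_layer, image, target_class):
    """Return a [H', W'] heat map for `target_class` on a single image."""
    activations, gradients = {}, {}

    def fwd_hook(_, __, output):
        activations["maps"] = output

    def bwd_hook(_, grad_input, grad_output):
        gradients["maps"] = grad_output[0]

    h1 = conv_layer.register_forward_hook(fwd_hook)
    h2 = conv_layer.register_full_backward_hook(bwd_hook)
    try:
        logits = model(image.unsqueeze(0))                 # [1, num_classes]
        model.zero_grad()
        logits[0, target_class].backward()
        A = activations["maps"][0]                         # [C, H', W'] feature maps
        alpha = gradients["maps"][0].mean(dim=(1, 2))      # [C] channel weights
        cam = F.relu((alpha[:, None, None] * A).sum(0))    # weighted sum + ReLU
        return (cam / (cam.max() + 1e-8)).detach()         # normalize to [0, 1]
    finally:
        h1.remove()
        h2.remove()

if __name__ == "__main__":
    model = nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
        nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2),
    )
    heat = grad_cam(model, conv_layer=model[2],
                    image=torch.randn(1, 64, 64), target_class=1)
    print(heat.shape)   # spatial size of the chosen conv layer, before upsampling
```

The returned map has the spatial resolution of the chosen layer and would typically be upsampled to the input size before overlaying it on an MRI slice.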
Machine learning interpretability: A survey on methods and metrics
- Electronics
- 2019