Corpus ID: 244270290

Interpretability Aware Model Training to Improve Robustness against Out-of-Distribution Magnetic Resonance Images in Alzheimer's Disease Classification

Authors: Merel Kuijs, Catherine R. Jutzeler, Bastian Rieck, Sarah Catharina Brüningk
Owing to its pristine soft-tissue contrast and high resolution, structural magnetic resonance imaging (MRI) is widely applied in neurology, making it a valuable data source for image-based machine learning (ML) and deep learning applications. The physical nature of MRI acquisition and reconstruction, however, causes variations in image intensity, resolution, and signal-to-noise ratio. Since ML models are sensitive to such variations, performance on out-of-distribution data, which is inherent to…
1 Citation


Out-of-Distribution (OOD) Detection Based on Deep Learning: A Review

This review presents the latest applications of deep-learning-based OOD detection, discusses open problems and expectations in the field, and categorizes methods according to their training data.



Improving the Generalizability of Convolutional Neural Network-Based Segmentation on CMR Images

It is demonstrated that a neural network trained on a single-site single-scanner dataset from the UK Biobank can be successfully applied to segmenting cardiac MR images across different sites and different scanners without substantial loss of accuracy.

Multi-Source Domain Adaptation via Optimal Transport for Brain Dementia Identification

A multi-source optimal transport (MSOT) framework for cross-domain Alzheimer's disease (AD) diagnosis with multi-site MRI data is proposed, with results suggesting its superiority over several state-of-the-art methods.

MR signal intensity: staying on the bright side in MR image interpretation

From this viewpoint, the different subjective choices that can be made to generate MR images are summarized and the importance of communication between radiologists and rheumatologists to correctly interpret images is stressed.

EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples

The authors' elastic-net attacks to DNNs (EAD) feature L1-oriented adversarial examples and include the state-of-the-art L2 attack as a special case, suggesting novel insights on leveraging L1 distortion in adversarial machine learning and security implications of DNNs.
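The summary above names the elastic-net objective only at a high level. A minimal NumPy sketch of that objective, minimizing c·f(x′) plus an elastic-net (L2² + β·L1) penalty on the perturbation, may help; the function name, toy inputs, and default coefficients below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def elastic_net_attack_loss(x, x_adv, model_loss, c=1.0, beta=0.01):
    """Elastic-net attack objective: c * f(x_adv) + ||d||_2^2 + beta * ||d||_1,
    where d = x_adv - x is the adversarial perturbation.

    With beta = 0 the L1 term vanishes and the objective reduces to the
    pure L2 attack objective, which is why EAD contains it as a special case.
    """
    d = (x_adv - x).ravel()
    l2_sq = np.dot(d, d)            # squared L2 distortion
    l1 = np.abs(d).sum()            # L1 distortion (encourages sparsity)
    return c * model_loss + l2_sq + beta * l1

# Toy example: a 4-pixel "image" and a candidate adversarial perturbation.
x = np.zeros(4)
x_adv = np.array([0.1, -0.1, 0.0, 0.2])
loss = elastic_net_attack_loss(x, x_adv, model_loss=0.5, c=1.0, beta=0.01)
```

Here the L1 term drives small perturbation components exactly to zero during optimization, which is the source of the sparse, L1-oriented adversarial examples the summary mentions.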

The impact of field strength on image quality in MRI

The physical principles of the field strength dependence of MRI in relation to image quality are reviewed, and diagnostic equivalence between two field strengths in at least two common clinical disease categories (multiple sclerosis and internal derangement of the knee) is demonstrated.

Interpretable Machine Learning in Healthcare

The landscape of recent advances to address the challenges of model interpretability in healthcare is explored, along with how one would go about choosing the right interpretable machine learning algorithm for a given problem in healthcare.

Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization

This work proposes a technique for producing 'visual explanations' for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent and explainable, and shows that even non-attention-based models learn to localize discriminative regions of the input image.
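The gradient-based localization described above reduces to a simple computation once the convolutional activations and their gradients are available. A minimal NumPy sketch (assuming pre-computed activations and class-score gradients; the toy shapes and random values are arbitrary):

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Compute a Grad-CAM heatmap.

    feature_maps: (K, H, W) activations of a convolutional layer.
    gradients:    (K, H, W) gradients of the target class score
                  with respect to those activations.
    """
    # Channel importance weights: global-average-pool the gradients.
    weights = gradients.mean(axis=(1, 2))                      # shape (K,)
    # Weighted combination of feature maps, followed by ReLU so that
    # only regions with a positive influence on the class remain.
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0.0)
    return cam

# Toy example with random activations and gradients for an 8-channel, 7x7 layer.
rng = np.random.default_rng(0)
activations = rng.standard_normal((8, 7, 7))
grads = rng.standard_normal((8, 7, 7))
heatmap = grad_cam(activations, grads)
```

In practice the heatmap is then upsampled to the input image size and overlaid on it; because only activations and gradients are needed, the method applies to CNN models without any architectural changes.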

Machine learning interpretability: A survey on methods and metrics

Electronics, 2019.