Corpus ID: 235795437

One Map Does Not Fit All: Evaluating Saliency Map Explanation on Multi-Modal Medical Images

Weina Jin, Xiaoxiao Li, G. Hamarneh
Being able to explain predictions to clinical end-users is a necessity for leveraging the power of AI models in clinical decision support. For medical images, saliency maps are the most common form of explanation: the maps highlight the features important for the AI model's prediction. Although many saliency map methods have been proposed, it is unknown how well they explain decisions on multi-modal medical images, where each modality/channel carries distinct clinical meanings of the…
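To make the idea concrete, here is a minimal gradient-saliency sketch on a toy linear model. The two-channel input, the weights, and the per-modality aggregation are hypothetical illustrations of modality-wise attribution, not the paper's method or experimental setup:

```python
import numpy as np

def gradient_saliency(weights, x):
    """Gradient saliency: |d f(x) / d x| per input feature.

    For the toy linear model f(x) = sum(weights * x), the gradient
    w.r.t. x is simply `weights`, so the map is |weights|."""
    grad = weights * np.ones_like(x)  # d/dx of sum(w * x) = w
    return np.abs(grad)

# Hypothetical two-modality input: 2 channels x 4 features each.
w = np.array([[0.9, -0.8, 0.7, 0.6],     # modality 0: heavily used
              [0.1, 0.0, -0.05, 0.02]])  # modality 1: barely used
x = np.ones_like(w)

saliency = gradient_saliency(w, x)
# Aggregate attributions per channel to compare modality importance.
modality_importance = saliency.sum(axis=1)
print(modality_importance)  # modality 0 receives far more attribution
```

For a real image classifier the gradient would be computed by backpropagation through the network, but the principle is the same: one attribution score per input location, which can then be compared across modalities.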



Sanity Checks for Saliency Maps
It is shown that some existing saliency methods are independent both of the model and of the data generating process, and that methods failing the proposed tests are inadequate for tasks sensitive to either data or model.
InfoMask: Masked Variational Latent Representation to Localize Chest Disease
This paper proposes a learned spatial masking mechanism to filter out irrelevant background signals from attention maps, resulting in more accurate localization of discriminatory regions.
Explainable Deep Learning Models in Medical Image Analysis
A review of the current applications of explainable deep learning for different medical imaging tasks is presented here.
Network Dissection: Quantifying Interpretability of Deep Visual Representations
This work uses the proposed Network Dissection method to test the hypothesis that interpretability is an axis-independent property of the representation space, then applies the method to compare the latent representations of various networks when trained to solve different classification problems.
Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning
A diagnostic tool based on a deep-learning framework for the screening of patients with common treatable blinding retinal diseases, which demonstrates performance comparable to that of human experts in classifying age-related macular degeneration and diabetic macular edema.
Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Concept Activation Vectors (CAVs) are introduced, which provide an interpretation of a neural net's internal state in terms of human-friendly concepts, and may be used to explore hypotheses and generate insights for a standard image classification network as well as a medical application.
"Hello AI": Uncovering the Onboarding Needs of Medical Practitioners for Human-AI Collaborative Decision-Making
This work investigates the key types of information medical experts desire when they are first introduced to a diagnostic AI assistant, providing a richer understanding of what experts find important in their introduction to AI assistants before integrating them into routine practice.
Seven-Point Checklist and Skin Lesion Classification Using Multitask Multimodal Neural Nets
We propose a multitask deep convolutional neural network, trained on multimodal data (clinical and dermoscopic images, and patient metadata), to classify the 7-point melanoma checklist criteria and…
The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)
The set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS), organized in conjunction with the MICCAI 2012 and 2013 conferences, are reported, finding that different algorithms worked best for different sub-regions, but that no single algorithm ranked in the top for all sub-regions simultaneously.
Fast and accurate view classification of echocardiograms using deep learning
A machine-learning technique is used to teach a computer to recognize different types of video and still images produced by echocardiogram tests, and it is shown that the model could correctly classify what heart anatomy was shown in videos with 98% accuracy. Expand