A Practical Guide on Explainable AI Techniques Applied on Biomedical Use Case Applications
@article{Bennetot2021APG, title={A Practical Guide on Explainable AI Techniques Applied on Biomedical Use Case Applications}, author={Adrien Bennetot and Ivan Donadello and Ayoub El Qadi and Mauro Dragoni and Thomas Frossard and Benedikt Wagner and Anna Saranti and Silvia Tulli and Maria Trocan and Raja Chatila and Andreas Holzinger and Artur S. d'Avila Garcez and Natalia Díaz-Rodríguez}, journal={SSRN Electronic Journal}, year={2021} }
Recent years have been characterized by an upsurge of opaque automatic decision support systems, such as Deep Neural Networks (DNNs). Although they have strong generalization and prediction capabilities, their inner workings do not allow detailed explanations of their behaviour to be obtained. As opaque machine learning models are increasingly being employed to make important predictions in critical environments, the danger is that decisions are made and acted upon without being justifiable or legitimate. Therefore, there…
References
Showing 1-10 of 52 references
Towards multi-modal causability with Graph Neural Networks enabling information fusion for explainable AI
- Information Fusion, 2021
Explainable Deep Image Classifiers for Skin Lesion Diagnosis
- arXiv, 2021
A case study on skin lesion images in which an existing XAI approach is customized to explain a deep learning model that recognizes different types of skin lesions, revealing that some of the most frequent skin lesion classes are distinctly separated.
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
- Information Fusion, 2020
Interpreting the Predictions of Complex ML Models by Layer-wise Relevance Propagation
- arXiv, 2016
This short paper summarizes a recent technique introduced by Bach et al. that explains predictions by decomposing the classification decision of DNN models in terms of input variables.
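LRP propagates a prediction score backwards through the network, redistributing it layer by layer in proportion to each unit's contribution. Below is a minimal, illustrative sketch of the epsilon rule on a toy two-layer ReLU network with random weights; it is not the implementation from the cited paper.

```python
import numpy as np

# Toy two-layer ReLU network with random weights (illustrative only).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 6)), np.zeros(6)   # input (4) -> hidden (6)
W2, b2 = rng.normal(size=(6, 3)), np.zeros(3)   # hidden (6) -> output (3)

x = rng.normal(size=4)
a1 = np.maximum(0, x @ W1 + b1)                 # hidden activations
out = a1 @ W2 + b2                              # class scores

def lrp_linear(a, W, b, relevance, eps=1e-6):
    """Redistribute relevance from a linear layer's outputs to its inputs (epsilon rule)."""
    z = a @ W + b                                # pre-activations of the layer
    s = relevance / (z + eps * np.sign(z))       # stabilised relevance-to-activation ratio
    return a * (W @ s)                           # relevance assigned to the layer's inputs

# Start from the winning class score and propagate it back to the input features.
R_out = np.zeros(3)
R_out[np.argmax(out)] = out.max()
R_hidden = lrp_linear(a1, W2, b2, R_out)
R_input = lrp_linear(x, W1, b1, R_hidden)

print("input relevances:", R_input)
print("conservation check:", R_input.sum(), "vs class score", out.max())
```

With zero biases, the total relevance is (approximately) conserved across layers, so the input relevances sum back to the explained class score.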
A Unified Approach to Interpreting Model Predictions
- NIPS, 2017
A unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations), which unifies six existing methods and presents new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.
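SHAP is available as the open-source shap Python package; a minimal sketch of explaining a tree model with it follows, assuming shap and scikit-learn are installed. The diabetes dataset and random-forest regressor are illustrative choices, not taken from the guide.

```python
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])   # shape: (100, n_features)

# Additivity: base value + sum of SHAP values reproduces each prediction.
print(explainer.expected_value + shap_values[0].sum(), model.predict(X.iloc[:1])[0])

# Global view of feature importance across the explained instances.
shap.summary_plot(shap_values, X.iloc[:100], show=False)
```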
Interactive machine learning for health informatics: when do we need the human-in-the-loop?
- Brain Informatics, 2016
Interactive machine learning (iML) is defined as “algorithms that can interact with agents and can optimize their learning behavior through these interactions, where the agents can also be human.”
Network Module Detection from Multi-Modal Node Features with a Greedy Decision Forest for Actionable Explainable AI
- arXiv, 2021
This work demonstrates subnetwork detection based on multi-modal node features using a novel Greedy Decision Forest with inherent interpretability, a crucial factor to retain experts and gain their trust in such algorithms.
Explaining nonlinear classification decisions with deep Taylor decomposition
- Pattern Recognition, 2017
Measurable Counterfactual Local Explanations for Any Classifier
- ECAI, 2020
A novel method for explaining the predictions of any classifier, using regression to generate local explanations, together with a definition of fidelity to the underlying classifier for local explanation models that is based on distances to a target decision boundary.
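As a rough, purely illustrative analogue of counterfactual reasoning (not the method proposed in the cited paper), the sketch below searches for the smallest single-feature shift that pushes an instance across a classifier's decision boundary; the dataset and model are arbitrary choices.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)

x0 = X[0].copy()
base_pred = clf.predict([x0])[0]

# Candidate shifts of up to +/- 3 standard deviations, tried smallest first.
steps = np.linspace(-3, 3, 121)
steps = steps[np.argsort(np.abs(steps))]

best = None
for j in range(X.shape[1]):                       # perturb one feature at a time
    for step in steps:
        x = x0.copy()
        x[j] += step * X[:, j].std()
        if clf.predict([x])[0] != base_pred:      # prediction flipped: counterfactual found
            if best is None or abs(step) < abs(best[1]):
                best = (j, step)
            break

if best is not None:
    print(f"smallest single-feature flip: feature {best[0]}, shift {best[1]:+.2f} std")
```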
Persuasive Explanation of Reasoning Inferences on Dietary Data
- PROFILES/SEMEX@ISWC, 2019
Results show that the persuasive explanations are able to reduce users' unhealthy behaviours.