A Practical Guide on Explainable AI Techniques Applied on Biomedical Use Case Applications

@article{Bennetot2021APG,
  title={A Practical Guide on Explainable AI Techniques Applied on Biomedical Use Case Applications},
  author={Adrien Bennetot and Ivan Donadello and Ayoub El Qadi and Mauro Dragoni and Thomas Frossard and Benedikt Wagner and Anna Saranti and Silvia Tulli and Maria Trocan and Raja Chatila and Andreas Holzinger and Artur S. d'Avila Garcez and Natalia Díaz-Rodríguez},
  journal={SSRN Electronic Journal},
  year={2021}
}
Recent years have been characterized by an upsurge of opaque automatic decision support systems, such as Deep Neural Networks (DNNs). Although they have great generalization and prediction abilities, their inner workings do not allow detailed explanations of their behaviour to be obtained. As opaque machine learning models are increasingly employed to make important predictions in critical environments, the danger is that decisions are produced and used which are not justifiable or legitimate. Therefore, there…

References

Showing 1–10 of 52 references

Explainable Deep Image Classifiers for Skin Lesion Diagnosis

A case study on skin lesion images in which an existing XAI approach is customized to explain a deep learning model that recognizes different types of skin lesions, revealing that some of the most frequent skin lesion classes are distinctly separated.

Interpreting the Predictions of Complex ML Models by Layer-wise Relevance Propagation

This short paper summarizes a recent technique introduced by Bach et al. that explains predictions by decomposing the classification decision of DNN models in terms of input variables.
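
The decomposition idea can be illustrated with a minimal sketch of an LRP-epsilon style rule for a small ReLU network; the weights, layer sizes, and epsilon value below are illustrative placeholders, not details from the paper.

```python
# Minimal sketch of layer-wise relevance propagation (LRP-epsilon) for a small
# ReLU network in NumPy; weights and epsilon are illustrative placeholders.
import numpy as np

def lrp_epsilon(weights, biases, x, eps=1e-6):
    """Propagate the network output back to per-input relevance scores."""
    # Forward pass, storing the activations of every layer.
    activations = [x]
    for W, b in zip(weights, biases):
        x = np.maximum(0.0, W @ x + b)   # ReLU layer
        activations.append(x)

    # Start with the output as the relevance to redistribute.
    relevance = activations[-1]
    for W, b, a in zip(reversed(weights), reversed(biases),
                       reversed(activations[:-1])):
        z = W @ a + b + eps              # stabilized pre-activations
        s = relevance / z                # relevance share per neuron
        relevance = a * (W.T @ s)        # redistribute to the layer below
    return relevance                     # one relevance score per input feature

# Toy usage with random weights standing in for a trained model.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(1, 4))]
biases = [np.zeros(4), np.zeros(1)]
print(lrp_epsilon(weights, biases, rng.normal(size=3)))
```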

A Unified Approach to Interpreting Model Predictions

SHAP (SHapley Additive exPlanations) is a unified framework for interpreting predictions that unifies six existing methods and presents new methods with improved computational performance and/or better consistency with human intuition than previous approaches.
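
The additive-attribution idea behind SHAP can be sketched by computing exact Shapley values for a toy model with a handful of features; the model, instance, and background point below are hypothetical, and practical use would rely on the approximations implemented in the shap library.

```python
# Exact Shapley values for a toy model: average each feature's marginal
# contribution over all orderings, using a background value for "absent"
# features. The scoring function and data points are hypothetical.
from itertools import permutations
import math

def toy_model(x):
    # Hypothetical model: a fixed linear scoring function over 3 features.
    return 2.0 * x[0] + 1.0 * x[1] - 0.5 * x[2]

def shapley_values(model, x, background):
    n = len(x)
    phi = [0.0] * n
    for order in permutations(range(n)):
        current = list(background)        # start from the background point
        prev = model(current)
        for i in order:
            current[i] = x[i]             # reveal feature i
            new = model(current)
            phi[i] += new - prev          # marginal contribution of i
            prev = new
    return [v / math.factorial(n) for v in phi]

x = [1.0, 3.0, 2.0]
background = [0.0, 0.0, 0.0]
phi = shapley_values(toy_model, x, background)
print(phi, sum(phi), toy_model(x) - toy_model(background))
```

The final print statement checks the local-accuracy property: the attributions sum to the difference between the model's output at the instance and at the background point.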

Interactive machine learning for health informatics: when do we need the human-in-the-loop?

  Andreas Holzinger, Brain Informatics, 2016
Interactive machine learning (iML) is defined as “algorithms that can interact with agents and can optimize their learning behavior through these interactions, where the agents can also be human.”
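
A minimal sketch of such an interaction loop, under the assumption that the "agent" is a human annotator answering uncertainty-based queries, might look as follows; the dataset, model, and query budget are illustrative choices, not details from the paper.

```python
# Pool-based active learning as a human-in-the-loop sketch: each round the
# most uncertain sample is sent to a (simulated) human annotator for a label.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(200, 5))
true_w = rng.normal(size=5)
y_true = (X_pool @ true_w > 0).astype(int)   # hidden labels the "human" knows

# Seed the labeled set with a few examples of each class.
idx0 = np.where(y_true == 0)[0][:5]
idx1 = np.where(y_true == 1)[0][:5]
labeled = list(idx0) + list(idx1)
unlabeled = [i for i in range(200) if i not in labeled]

model = LogisticRegression()
for round_ in range(5):
    model.fit(X_pool[labeled], y_true[labeled])
    proba = model.predict_proba(X_pool[unlabeled])[:, 1]
    # Query the sample the model is least sure about (closest to 0.5).
    query = unlabeled[int(np.argmin(np.abs(proba - 0.5)))]
    labeled.append(query)                    # the human provides this label
    unlabeled.remove(query)
    print(f"round {round_}: accuracy={model.score(X_pool, y_true):.2f}")
```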

Network Module Detection from Multi-Modal Node Features with a Greedy Decision Forest for Actionable Explainable AI

This work demonstrates subnetwork detection based on multi-modal node features using a novel Greedy Decision Forest with inherent interpretability, a crucial factor for retaining experts and gaining their trust in such algorithms.

Measurable Counterfactual Local Explanations for Any Classifier

A novel method is introduced for explaining the predictions of any classifier by using regression to generate local explanations, together with a definition of fidelity to the underlying classifier for local explanation models based on distances to a target decision boundary.
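
As a rough illustration of coupling a regression-based local explanation with a boundary-distance notion of fidelity, the sketch below fits a linear surrogate around one instance of a hypothetical black-box classifier and compares the surrogate's estimated counterfactual shift with the shift found on the black box itself; this is an assumption-laden simplification, not the method proposed in the paper.

```python
# Local linear surrogate + counterfactual fidelity check. The black-box
# classifier, perturbation scale, and probe feature are illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

def black_box(X):
    # Hypothetical opaque classifier: probability from a fixed nonlinear score.
    score = 1.5 * X[:, 0] + np.sin(X[:, 1])
    return 1.0 / (1.0 + np.exp(-score))

rng = np.random.default_rng(0)
x = np.array([0.2, -0.4])

# Local surrogate: regress black-box probabilities on perturbed neighbours.
neighbours = x + rng.normal(scale=0.3, size=(500, 2))
surrogate = LinearRegression().fit(neighbours, black_box(neighbours))

# Surrogate-estimated counterfactual: shift along feature 0 until p = 0.5.
w, b = surrogate.coef_, surrogate.intercept_
delta_est = (0.5 - b - w @ x) / w[0]

# Actual counterfactual distance on the black box, via a simple line search.
steps = np.linspace(-3, 3, 2001)
probs = black_box(np.column_stack([x[0] + steps, np.full_like(steps, x[1])]))
delta_true = steps[int(np.argmin(np.abs(probs - 0.5)))]

print(f"estimated shift {delta_est:.3f}, actual shift {delta_true:.3f}, "
      f"fidelity error {abs(delta_est - delta_true):.3f}")
```

The gap between the estimated and actual shifts plays the role of a fidelity measure: the smaller it is, the better the local explanation reflects the classifier's decision boundary.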

Persuasive Explanation of Reasoning Inferences on Dietary Data

Results show that the persuasive explanations are able to reduce users' unhealthy behaviours.
...