The grammar of interactive explanatory model analysis

@article{Baniecki2020TheGO,
  title={The grammar of interactive explanatory model analysis},
  author={Hubert Baniecki and P. Biecek},
  journal={Data Mining and Knowledge Discovery},
  year={2020},
  pages={1--37}
}
The growing need for in-depth analysis of predictive models leads to a series of new methods for explaining their local and global properties. Which of these methods is the best? It turns out that this is an ill-posed question. One cannot sufficiently explain a black-box machine learning model using a single method that gives only one perspective. Isolated explanations are prone to misunderstanding, leading to wrong or simplistic reasoning. This problem is known as the Rashomon effect and… 

Do not explain without context: addressing the blind spot of model explanations

It is postulated that obtaining robust and useful explanations always requires supporting them with a broader context, and that many model explanations depend directly or indirectly on the choice of the referenced data distribution.

dalex: Responsible Machine Learning with Interactive Explainability and Fairness in Python

Dalex, a Python package which implements the model-agnostic interface for interactive model exploration, adopts the design crafted through the development of various tools for responsible machine learning; thus, it aims at the unification of the existing solutions.
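
As a minimal sketch of how this interface is typically used (assuming a fitted scikit-learn classifier clf, a feature frame X, and labels y; these names, and the "sex"/"male" fairness columns, are placeholders, not from the paper):

  import dalex as dx

  # Wrap any fitted model in a model-agnostic explainer.
  explainer = dx.Explainer(clf, X, y, label="example model")

  # Global explanation: permutation-based variable importance.
  explainer.model_parts().plot()

  # Local explanation: break-down attributions for one observation.
  explainer.predict_parts(X.iloc[[0]]).plot()

  # Fairness check against a protected attribute (placeholder column).
  explainer.model_fairness(protected=X["sex"], privileged="male").fairness_check()

Each result object exposes a plot() method, which is how the interactive exploration described above is surfaced.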

Explanation as a Process: User-Centric Construction of Multi-level and Multi-modal Explanations

This work presents a process-based approach that combines multi-level and multi-modal explanations, and provides a proof-of-concept implementation for concepts induced from a semantic net about living beings.

A comprehensive taxonomy for explainable artificial intelligence: a systematic survey of surveys on methods and concepts

A complete taxonomy of XAI methods with respect to notions present in the current state of research is provided, laying the foundations for targeted, use-case-oriented, and context-sensitive future research.

Interactive Slice Visualization for Exploring Machine Learning Models

This work uses interactive visualization of slices of predictor space to address the interpretability deficit, opening up the black box of machine learning algorithms for the purpose of interrogating, explaining, validating, and comparing model fits.

XAI Method Properties: A (Meta-)study

This paper summarizes the most cited and current taxonomies in a meta-analysis in order to highlight the essential aspects of the state-of-the-art in XAI.

Explainable Machine Learning applied to Single-Nucleotide Polymorphisms for Systemic Lupus Erythematosus Prediction

Approaches to the exploration and explanation of machine learning models that quantify an individual's risk of SLE using single-nucleotide polymorphisms (SNPs) as features are explored.

Responsible Prediction Making of COVID-19 Mortality (Student Abstract)

This paper shows how to advance current state-of-the-art predictive models toward new responsible standards by applying Interactive Explanatory Model Analysis (IEMA) implemented in modelStudio (Baniecki and Biecek 2019, 2020).

Explainable Artificial Intelligence Based Fault Diagnosis and Insight Harvesting for Steel Plates Manufacturing

For fault diagnosis of steel plates, a methodology for incorporating XAI-based insights into the data science process of developing a high-precision classifier is reported, and a high-precision fault-diagnosis classifier is developed.

References

One Explanation Does Not Fit All

This paper discusses the promise of Interactive Machine Learning for improved transparency of black-box systems using the example of contrastive explanations, a state-of-the-art approach to Interpretable Machine Learning. It shows how to personalise counterfactual explanations by interactively adjusting their conditional statements and how to extract additional explanations by asking follow-up "What if?" questions.

From local explanations to global understanding with explainable AI for trees

An explanation method for trees is presented that enables the computation of optimal local explanations for individual predictions, and the authors demonstrate their method on three medical datasets.

“Why Should I Trust You?”: Explaining the Predictions of Any Classifier

LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction.
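
For illustration, a minimal sketch with the lime package on tabular data (assuming a fitted classifier clf, a training matrix X_train, a test row X_test[0], and feature_names; all names are placeholders):

  from lime.lime_tabular import LimeTabularExplainer

  # Build the explainer on the training data distribution.
  explainer = LimeTabularExplainer(
      X_train,
      feature_names=feature_names,
      class_names=["negative", "positive"],
      mode="classification",
  )

  # Fit a sparse, interpretable model locally around one instance.
  explanation = explainer.explain_instance(X_test[0], clf.predict_proba, num_features=5)
  print(explanation.as_list())  # (feature condition, weight) pairs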

DALEX: Explainers for Complex Predictive Models in R

  • P. Biecek
  • Computer Science
    J. Mach. Learn. Res.
  • 2018
A consistent collection of explainers for predictive models, a.k.a. black boxes, based on a uniform standardized grammar of model exploration which may be easily extended.

A Unified Approach to Interpreting Model Predictions

A unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations), which unifies six existing methods and presents new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.
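
A minimal usage sketch of the shap package that implements this framework (assuming a fitted tree-based model and a feature frame X; names are illustrative):

  import shap

  # Model-agnostic entry point; dispatches to fast exact algorithms for trees.
  explainer = shap.Explainer(model, X)
  shap_values = explainer(X)            # additive attributions per observation

  shap.plots.waterfall(shap_values[0])  # local explanation for one prediction
  shap.plots.beeswarm(shap_values)      # global summary across the dataset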

On cognitive preferences and the plausibility of rule-based models

It is argued that—all other things being equal—longer explanations may be more convincing than shorter ones, and that the predominant bias for shorter models may not be suitable when it comes to user acceptance of the learned models.

What Would You Ask the Machine Learning Model? Identification of User Needs for Model Explanations Based on Human-Model Conversations

This is the first study which uses a conversational system to collect the needs of human operators from the interactive and iterative dialogue explorations of a predictive model.

One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques

This work introduces AI Explainability 360, an open-source software toolkit featuring eight diverse and state-of-the-art explainability methods and two evaluation metrics, and provides a taxonomy to help entities requiring explanations to navigate the space of explanation methods.

AI Explainability 360: An Extensible Toolkit for Understanding Data and Machine Learning Models

This work introduces AI Explainability 360, an open-source Python toolkit featuring ten diverse and state-of-the-art explainability methods and two evaluation metrics, and provides a taxonomy to help entities requiring explanations to navigate the space of interpretation and explanation methods.

All Models are Wrong, but Many are Useful: Learning a Variable's Importance by Studying an Entire Class of Prediction Models Simultaneously

Model class reliance (MCR) is proposed as the range of VI values across all well-performing models in a prespecified class, which gives a more comprehensive description of importance by accounting for the fact that many prediction models, possibly of different parametric forms, may fit the data well.
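
As a toy illustration of the idea, not the exact MCR estimator from the paper, one can compute permutation importance for several well-performing models and report its range over the class (assuming models already fitted on X, y with scikit-learn; names are placeholders):

  from sklearn.inspection import permutation_importance

  # Several fitted models of possibly different forms, all with acceptable accuracy.
  models = [fitted_forest, fitted_gbm, fitted_logreg]

  # Importance of one variable (column index 0 here) across the model class.
  scores = [
      permutation_importance(m, X, y, n_repeats=10, random_state=0).importances_mean[0]
      for m in models
  ]
  print("importance range across models:", min(scores), "-", max(scores))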
...