The grammar of interactive explanatory model analysis
@article{Baniecki2020TheGO,
  title   = {The grammar of interactive explanatory model analysis},
  author  = {Hubert Baniecki and P. Biecek},
  journal = {Data Mining and Knowledge Discovery},
  year    = {2020},
  pages   = {1-37}
}
The growing need for in-depth analysis of predictive models leads to a series of new methods for explaining their local and global properties. Which of these methods is the best? It turns out that this is an ill-posed question. One cannot sufficiently explain a black-box machine learning model using a single method that gives only one perspective. Isolated explanations are prone to misunderstanding, leading to wrong or simplistic reasoning. This problem is known as the Rashomon effect and…
13 Citations
Do not explain without context: addressing the blind spot of model explanations
- Computer Science, ArXiv
- 2021
It is postulated that obtaining robust and useful explanations always requires supporting them with a broader context, and that many model explanations depend directly or indirectly on the choice of the referenced data distribution.
dalex: Responsible Machine Learning with Interactive Explainability and Fairness in Python
- Computer Science, J. Mach. Learn. Res.
- 2021
Dalex, a Python package which implements a model-agnostic interface for interactive model exploration, adopts the design crafted through the development of various tools for responsible machine learning; it thus aims to unify the existing solutions.
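As a rough illustration of the workflow described here, a single Explainer object wraps a fitted model and produces both global (model-level) and local (instance-level) explanations. The following is a minimal, hedged sketch; the dataset and model are placeholders, not taken from the paper.

```python
# Minimal sketch of the dalex workflow: wrap a fitted model in an Explainer,
# then request global and local explanations from the same object.
# The dataset and model below are illustrative placeholders.
import dalex as dx
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer(as_frame=True)
X, y = data.data, data.target
model = RandomForestClassifier(random_state=0).fit(X, y)

# Model-agnostic interface for interactive model exploration
explainer = dx.Explainer(model, X, y, label="random forest")

explainer.model_performance()          # global: performance summary
explainer.model_parts()                # global: permutation variable importance
explainer.predict_parts(X.iloc[[0]])   # local: break-down attribution for one instance
explainer.predict_profile(X.iloc[[0]]) # local: ceteris-paribus (what-if) profile
```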
Explanation as a Process: User-Centric Construction of Multi-level and Multi-modal Explanations
- Computer Science, KI
- 2021
This work presents a process-based approach that combines multi-level and multi-modal explanations, and provides a proof-of-concept implementation for concepts induced from a semantic net about living beings.
A comprehensive taxonomy for explainable artificial intelligence: a systematic survey of surveys on methods and concepts
- Computer Science, Data Mining and Knowledge Discovery
- 2023
A complete taxonomy of XAI methods with respect to notions present in the current state of research is provided, laying foundations for targeted, use-case-oriented, and context-sensitive future research.
Interactive Slice Visualization for Exploring Machine Learning Models
- Computer Science, J. Comput. Graph. Stat.
- 2022
This work uses interactive visualization of slices of predictor space to address the interpretability deficit, opening up the black-box of machine learning algorithms, for the purpose of interrogating, explaining, validating and comparing model fits.
Assessing the representational accuracy of data-driven models: The case of the effect of urban green infrastructure on temperature
- Computer Science, Environ. Model. Softw.
- 2021
XAI Method Properties: A (Meta-)study
- Computer Science, ArXiv
- 2021
This paper summarizes the most cited and current taxonomies in a meta-analysis in order to highlight the essential aspects of the state-of-the-art in XAI.
Explainable Machine Learning applied to Single-Nucleotide Polymorphisms for Systemic Lupus Erythematosus Prediction
- Biology, 2020 11th International Conference on Information, Intelligence, Systems and Applications (IISA)
- 2020
Approaches to the exploration and explanation of machine learning models that quantify an individual's risk of SLE using single-nucleotide polymorphisms (SNPs) as features are explored.
Responsible Prediction Making of COVID-19 Mortality (Student Abstract)
- Computer Science, AAAI
- 2021
This paper shows how to bring current state-of-the-art predictive models up to new responsible standards by applying Interactive Explanatory Model Analysis (IEMA) as implemented in modelStudio (Baniecki and Biecek 2019, 2020).
Explainable Artificial Intelligence Based Fault Diagnosis and Insight Harvesting for Steel Plates Manufacturing
- Computer Science, ArXiv
- 2020
For fault diagnosis of steel plates, a methodology for incorporating XAI-based insights into the data-science process of developing a high-precision classifier is reported, and such a classifier has been developed.
References
Showing 1-10 of 109 references
One Explanation Does Not Fit All
- Computer Science, KI - Künstliche Intelligenz
- 2020
This paper discusses the promise of Interactive Machine Learning for improved transparency of black-box systems, using the example of contrastive explanations, a state-of-the-art approach to Interpretable Machine Learning. It shows how to personalise counterfactual explanations by interactively adjusting their conditional statements and how to extract additional explanations by asking follow-up "What if?" questions.
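The follow-up "What if?" interaction can be sketched very simply: change one feature value of an instance and compare the model's predictions. The model, dataset, and feature below are illustrative placeholders, not the paper's setup.

```python
# Toy "What if?" query: change a single feature of an instance and re-predict.
# Dataset, model, and feature are illustrative placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

data = load_breast_cancer(as_frame=True)
X, y = data.data, data.target
model = GradientBoostingClassifier(random_state=0).fit(X, y)

instance = X.iloc[[0]].copy()
baseline = model.predict_proba(instance)[0, 1]

what_if = instance.copy()
what_if["mean radius"] *= 0.8   # "What if the radius were 20% smaller?"
changed = model.predict_proba(what_if)[0, 1]

print(f"baseline={baseline:.3f}, what-if={changed:.3f}")
```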
From local explanations to global understanding with explainable AI for trees
- Computer Science, Nat. Mach. Intell.
- 2020
An explanation method for trees is presented that enables the computation of optimal local explanations for individual predictions, and the authors demonstrate their method on three medical datasets.
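This tree-specific method is available, for example, as TreeExplainer in the shap Python library. A minimal sketch follows, with the dataset and model as placeholder assumptions.

```python
# Sketch of tree-specific Shapley attribution via shap.TreeExplainer,
# followed by aggregating local explanations into a global summary view.
# Dataset and model are illustrative placeholders.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # one attribution per feature per prediction

# Local explanations aggregated into a global view (summary plot)
shap.summary_plot(shap_values, X, show=False)
```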
“Why Should I Trust You?”: Explaining the Predictions of Any Classifier
- Computer Science, NAACL
- 2016
LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction.
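A minimal sketch of the tabular LIME workflow, assuming the lime Python package and a placeholder dataset and model:

```python
# Sketch of LIME for tabular data: fit a local, interpretable surrogate
# around a single prediction of an arbitrary classifier.
# Dataset and model are illustrative placeholders.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    discretize_continuous=True,
)
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
print(explanation.as_list())   # weighted feature conditions for this prediction
```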
DALEX: Explainers for Complex Predictive Models in R
- Computer Science, J. Mach. Learn. Res.
- 2018
A consistent collection of explainers for predictive models (a.k.a. black boxes) is presented, based on a uniform, standardized grammar of model exploration which may be easily extended.
A Unified Approach to Interpreting Model Predictions
- Computer Science, NIPS
- 2017
A unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations), which unifies six existing methods and presents new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.
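Beyond the tree-specific estimator above, the model-agnostic estimator in the shap library (KernelExplainer) approximates additive Shapley attributions for any prediction function. A brief sketch, with dataset and model as placeholder assumptions:

```python
# Sketch of model-agnostic SHAP (KernelExplainer): additive Shapley
# attributions for any prediction function, approximated by sampling.
# Dataset and model are illustrative placeholders.
import shap
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
model = SVC(probability=True, random_state=0).fit(X, y)

background = shap.sample(X, 50)   # background data for the expected value
explainer = shap.KernelExplainer(model.predict_proba, background)
shap_values = explainer.shap_values(X[:1])   # attributions for one instance
print(shap_values)
```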
On cognitive preferences and the plausibility of rule-based models
- Computer Science, Machine Learning
- 2019
It is argued that—all other things being equal—longer explanations may be more convincing than shorter ones, and that the predominant bias for shorter models may not be suitable when it comes to user acceptance of the learned models.
What Would You Ask the Machine Learning Model? Identification of User Needs for Model Explanations Based on Human-Model Conversations
- Computer Science, PKDD/ECML Workshops
- 2020
This is the first study which uses a conversational system to collect the needs of human operators from the interactive and iterative dialogue explorations of a predictive model.
One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques
- Computer Science, ArXiv
- 2019
This work introduces AI Explainability 360, an open-source software toolkit featuring eight diverse and state-of-the-art explainability methods and two evaluation metrics, and provides a taxonomy to help entities requiring explanations to navigate the space of explanation methods.
AI Explainability 360: An Extensible Toolkit for Understanding Data and Machine Learning Models
- Computer Science, J. Mach. Learn. Res.
- 2020
This work introduces AI Explainability 360, an open-source Python toolkit featuring ten diverse and state-of-the-art explainability methods and two evaluation metrics and provides a taxonomy to help entities requiring explanations to navigate the space of interpretation and explanation methods.
All Models are Wrong, but Many are Useful: Learning a Variable's Importance by Studying an Entire Class of Prediction Models Simultaneously
- Computer Science, J. Mach. Learn. Res.
- 2019
Model class reliance (MCR) is proposed as the range of variable importance (VI) values across all well-performing models in a prespecified class, which gives a more comprehensive description of importance by accounting for the fact that many prediction models, possibly of different parametric forms, may fit the data well.
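A toy sketch of the idea (not the paper's actual MCR estimator): compute permutation importance for a handful of reasonably well-performing fitted models and report the per-variable range across them. The model choices and dataset here are arbitrary placeholders.

```python
# Conceptual illustration of model class reliance: look at the *range* of a
# variable's permutation importance across several models, rather than a
# single model's value. This is a toy sketch, not Fisher et al.'s estimator.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = [
    LogisticRegression(max_iter=5000).fit(X_tr, y_tr),
    RandomForestClassifier(random_state=0).fit(X_tr, y_tr),
    GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr),
]

importances = np.array([
    permutation_importance(m, X_te, y_te, n_repeats=10, random_state=0).importances_mean
    for m in models
])

# Per-variable range of importance across this (small) class of models
lo, hi = importances.min(axis=0), importances.max(axis=0)
for j in np.argsort(hi)[::-1][:5]:
    print(f"feature {j}: importance range [{lo[j]:.3f}, {hi[j]:.3f}]")
```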