Corpus ID: 226237214

Causal Shapley Values: Exploiting Causal Knowledge to Explain Individual Predictions of Complex Models

@article{Heskes2020CausalSV,
  title={Causal Shapley Values: Exploiting Causal Knowledge to Explain Individual Predictions of Complex Models},
  author={Tom M. Heskes and Evi Sijben and Ioan Gabriel Bucur and Tom Claassen},
  journal={ArXiv},
  year={2020},
  volume={abs/2011.01625}
}
Shapley values underlie one of the most popular model-agnostic methods within explainable artificial intelligence. These values are designed to attribute the difference between a model's prediction and an average baseline to the different features used as input to the model. Being based on solid game-theoretic principles, Shapley values uniquely satisfy several desirable properties, which is why they are increasingly used to explain the predictions of possibly complex and highly non-linear… 
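To make the attribution concrete, below is a minimal, illustrative Python sketch (not the authors' implementation) of exact Shapley values for a model with a handful of features. The value function v(S) is the piece that the different Shapley-value variants swap out: the version here uses a simple marginal expectation over a background sample, whereas the causal Shapley values proposed in the paper would instead use interventional expectations derived from a causal ordering, which this sketch does not model. All function and variable names are illustrative.

from itertools import combinations
from math import factorial
import numpy as np

def shapley_values(f, x, background):
    # Exact Shapley values of f at instance x (1-D array), using a marginal
    # value function estimated from the rows of `background` (2-D array).
    n = x.shape[0]

    def v(S):
        # v(S): expected model output when the features in S are fixed to
        # their values in x and the remaining features keep their background
        # distribution (a marginal value function; causal variants differ here).
        X = background.copy()
        X[:, list(S)] = x[list(S)]
        return f(X).mean()

    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):  # coalition sizes 0 .. n-1 among the other features
            for S in combinations(others, k):
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (v(S + (i,)) - v(S))
    return phi

# Toy usage: a linear model with three features. Together with the average
# baseline v(empty set), the attributions sum to f(x).
rng = np.random.default_rng(0)
background = rng.normal(size=(500, 3))
f = lambda X: X @ np.array([1.0, 2.0, -1.0])
x = np.array([0.5, -1.0, 2.0])
print(shapley_values(f, x, background))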

Citations of this paper

From Shapley Values to Generalized Additive Models and back

In explainable machine learning, local post-hoc explanation algorithms and inherently interpretable models are often seen as competing approaches. This work offers a novel perspective on Shapley…

Accurate and robust Shapley Values for explaining predictions and focusing on local important variables

TLDR
The concept of "Same Decision Probability" (SDP) that evaluates the robustness of a prediction when some variables are missing is used and produces sparse additive explanations easier to visualize and analyse.

Shapley Flow: A Graph-based Approach to Interpreting Model Predictions

TLDR
Shapley Flow is a novel approach to interpreting machine learning models that considers the entire causal graph and assigns credit to edges rather than treating nodes as the fundamental unit of credit assignment, enabling users to understand the flow of importance through a system and to reason about potential interventions.

Rational Shapley Values

TLDR
Rational Shapley values are introduced: a novel XAI method that synthesizes and extends these seemingly incompatible approaches in a rigorous, flexible manner and compares favorably to state-of-the-art XAI tools in a range of quantitative and qualitative comparisons.

Accurate Shapley Values for explaining tree-based models

TLDR
This work recalls an invariance principle for Shapley values (SV), derives the correct approach for computing the SV of categorical variables, which are particularly sensitive to the encoding used, and introduces two estimators of Shapley values that exploit the tree structure efficiently and are more accurate than state-of-the-art methods.

Algorithms to estimate Shapley value feature attributions

TLDR
This work describes the multiple types of Shapley value feature attributions and the methods to calculate each, covering two distinct families of approaches: model-agnostic and model-specific approximations.
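As one concrete instance of the model-agnostic family mentioned in this entry, the sketch below shows permutation sampling in the style of Štrumbelj and Kononenko: each feature's marginal contribution is averaged over randomly sampled feature orderings. It is a hedged sketch under a marginal value function, not the API of any particular library.

import numpy as np

def sampled_shapley(f, x, background, n_perm=200, seed=0):
    # Monte Carlo estimate of Shapley values: average each feature's marginal
    # contribution over randomly sampled feature orderings (permutations).
    rng = np.random.default_rng(seed)
    n = x.shape[0]
    phi = np.zeros(n)

    def v(S):
        # Marginal value function: fix the features in S to their values in x,
        # average the model output over the background sample for the rest.
        X = background.copy()
        X[:, S] = x[S]
        return f(X).mean()

    for _ in range(n_perm):
        order = rng.permutation(n)
        S, prev = [], v([])
        for i in order:
            S.append(int(i))
            cur = v(S)
            phi[i] += cur - prev  # marginal contribution of feature i in this ordering
            prev = cur
    return phi / n_perm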

Using Shapley Values and Variational Autoencoders to Explain Predictive Models with Dependent Mixed Features

TLDR
This paper uses a variational autoencoder with arbitrary conditioning (VAEAC) to model all feature dependencies simultaneously and demonstrates that this approach to Shapley value estimation outperforms the state-of-the-art methods for a wide range of settings for both continuous and mixed dependent features.

Causal versus Marginal Shapley Values for Robotic Lever Manipulation Controlled using Deep Reinforcement Learning

TLDR
It is shown that enabling an explanation method to account for indirect effects and incorporating some application knowledge can lead to explanations that better agree with human intuition.

Explaining Preferences with Shapley Values

TLDR
This paper proposes PREF-SHAP, a Shapley value-based model explanation framework for pairwise comparison data, which derives the appropriate value functions for preference models and extends the framework to model and explain context-specific information, such as the surface type in a tennis game.

The Shapley Value of coalition of variables provides better explanations

TLDR
A Python library that reliably computes conditional expectations and Shapley values for tree-based models is implemented and compared with state-of-the-art algorithms on toy models and real data sets.

References

Showing 1–10 of 35 references

Asymmetric Shapley values: incorporating causal knowledge into model-agnostic explainability

TLDR
Asymmetric Shapley values can improve model explanations by incorporating causal information, provide an unambiguous test for unfair discrimination in model predictions, enable sequentially incremental explanations in time-series models, and support feature-selection studies without the need for model retraining.

Problems with Shapley-value-based explanations as feature importance measures

TLDR
It is shown that mathematical problems arise when Shapley values are used for feature importance and that the solutions to mitigate these necessarily induce further complexity, such as the need for causal reasoning.

The Explanation Game: Explaining Machine Learning Models with Cooperative Game Theory

TLDR
This work illustrates how subtle differences in the underlying game formulations of existing methods can cause large differences in attribution for a prediction, and proposes a general framework for generating explanations for ML models, called formulate, approximate, and explain (FAE).

Feature relevance quantification in explainable AI: A causality problem

TLDR
It is concluded that unconditional rather than conditional expectations provide the right notion of dropping features, in contradiction to the theoretical justification of the software package SHAP.
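To illustrate the distinction at stake in this reference, here is a small example on a toy bivariate Gaussian (an assumption for illustration, not taken from the cited paper) contrasting the unconditional (marginal) and conditional expectations obtained when "dropping" the second feature of a correlated pair; with dependent features the two notions yield different value functions.

import numpy as np

rng = np.random.default_rng(1)
rho = 0.9                                    # correlation between the two features
X = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=100_000)
f = lambda Z: Z[:, 0] + Z[:, 1]              # toy additive model
x = np.array([2.0, 0.0])                     # instance: feature 1 kept, feature 2 "dropped"

# Unconditional (marginal) expectation: fix x1, leave feature 2's background
# distribution untouched (its dependence on feature 1 is broken).
v_marginal = f(np.column_stack([np.full(len(X), x[0]), X[:, 1]])).mean()

# Conditional expectation: fix x1 and draw feature 2 from p(x2 | x1 = 2.0),
# which for this bivariate Gaussian has mean rho * x1.
x2_cond = rng.normal(rho * x[0], np.sqrt(1.0 - rho**2), size=len(X))
v_conditional = f(np.column_stack([np.full(len(X), x[0]), x2_cond])).mean()

print(v_marginal, v_conditional)             # roughly 2.0 vs roughly 3.8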

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

TLDR
LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction.

Consistent Individualized Feature Attribution for Tree Ensembles

TLDR
This work develops fast exact tree solutions for SHAP (SHapley Additive exPlanation) values, which are the unique consistent and locally accurate attribution values, and proposes a rich visualization of individualized feature attributions that improves over classic attribution summaries and partial dependence plots, and a unique "supervised" clustering.

A Unified Approach to Interpreting Model Predictions

TLDR
A unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations), which unifies six existing methods and presents new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.

Explaining Explanations in AI

TLDR
This work contrasts the different schools of thought on what makes an explanation in philosophy and sociology, and suggests that machine learning might benefit from viewing the problem more broadly.

A Value for n-person Games

At the foundation of the theory of games is the assumption that the players of a game can evaluate, in their utility scales, every "prospect" that might arise as a result of a play…