• Corpus ID: 225094241

Shapley Flow: A Graph-based Approach to Interpreting Model Predictions

Jiaxuan Wang, Jenna Wiens, Scott M. Lundberg
Many existing approaches for estimating feature importance are problematic because they ignore or hide dependencies among features. A causal graph, which encodes the relationships among input variables, can aid in assigning feature importance. However, current approaches that assign credit to nodes in the causal graph fail to explain the entire graph. In light of these limitations, we propose Shapley Flow, a novel approach to interpreting machine learning models. It considers the entire causal graph.
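As background for the abstract above, the classical (symmetric) Shapley attribution that this line of work builds on can be sketched in a few lines. This is a generic illustration, not the paper's edge-based method; the toy model, inputs, and baseline are purely illustrative.

```python
from itertools import permutations

def shapley_values(f, x, baseline):
    """Exact Shapley values by averaging each feature's marginal
    contribution over all orderings of the features (O(n!), toy sizes only)."""
    n = len(x)
    phi = [0.0] * n
    orders = list(permutations(range(n)))
    for order in orders:
        z = list(baseline)          # start from the baseline input
        prev = f(z)
        for i in order:             # switch features on one at a time
            z[i] = x[i]
            cur = f(z)
            phi[i] += cur - prev    # marginal contribution of feature i
            prev = cur
    return [p / len(orders) for p in phi]

# Toy linear model: attributions recover each term's contribution.
f = lambda z: z[0] + 2 * z[1]
print(shapley_values(f, x=[1.0, 1.0], baseline=[0.0, 0.0]))  # [1.0, 2.0]
```

For a linear model the ordering does not matter, so every permutation yields the same marginal contributions; with feature interactions or dependencies the permutations disagree, which is exactly the setting the causal approaches listed below address.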

Quantifying intrinsic causal contributions via structure preserving interventions

This work proposes a new notion of causal contribution that describes the 'intrinsic' part of the contribution of a node to a target node in a DAG, and a Shapley-based symmetrization that yields a measure invariant across arbitrary orderings of nodes.

Marginal Contribution Feature Importance - an Axiomatic Approach for Explaining Data

A set of axioms capturing properties expected from a feature importance score when explaining data is developed, and it is proved that there exists only one score satisfying all of them: the Marginal Contribution Feature Importance (MCI).

WeightedSHAP: analyzing and improving Shapley based feature attributions

The Shapley value is a popular approach for measuring the influence of individual features. While Shapley feature attribution is built upon desiderata from game theory, some of its constraints may be less…

Industrial Data Science for Batch Manufacturing Processes

Batch processes show several sources of variability, from raw materials' properties to initial and evolving conditions that change during the different events in the manufacturing process.

The Shapley Value in Machine Learning (2022)

An overview of the most important applications of the Shapley value in machine learning: feature selection, explainability, multi-agent reinforcement learning, ensemble pruning, and data valuation.

Comparing Baseline Shapley and Integrated Gradients for Local Explanation: Some Additional Insights

Simulation studies are used to examine the differences when neural networks with the ReLU activation function are used to fit the models, with hyperparameters tuned via cross-validation.

Algorithms to estimate Shapley value feature attributions

This work describes the multiple types of Shapley value feature attributions and methods to calculate each, covering two distinct families of approaches: model-agnostic and model-specific approximations.

On Measuring Causal Contributions via do-interventions

Causal contributions measure the strengths of different causes to a target quantity. Understanding causal contributions is important in empirical sciences and data-driven disciplines.

Scientific Inference With Interpretable Machine Learning: Analyzing Models to Learn About Real-World Phenomena

A phenomenon-centric approach to IML in science clarifies the opportunities and limitations of IML for inference, shows that conditional rather than marginal sampling is required, and identifies the conditions under which IML methods can be trusted.

Asymmetric Shapley values: incorporating causal knowledge into model-agnostic explainability

Asymmetric Shapley values can improve model explanations by incorporating causal information, provide an unambiguous test for unfair discrimination in model predictions, enable sequentially incremental explanations in time-series models, and support feature-selection studies without the need for model retraining.

Feature relevance quantification in explainable AI: A causality problem

It is concluded that unconditional rather than conditional expectations provide the right notion of dropping features, in contradiction to the theoretical justification of the software package SHAP.

The many Shapley values for model explanation

The axiomatic approach is used to study the differences between some of the many operationalizations of the Shapley value for attribution, and a technique called Baseline Shapley (BShap) is proposed that is backed by a proper uniqueness result.



A Value for n-person Games

At the foundation of the theory of games is the assumption that the players of a game can evaluate, in their utility scales, every "prospect" that might arise as a result of a play.
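The value introduced in this work is the now-standard Shapley value; in modern notation, for a game $v$ on player set $N$ with $|N| = n$, player $i$ receives

```latex
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(n - |S| - 1)!}{n!} \bigl( v(S \cup \{i\}) - v(S) \bigr)
```

i.e., the marginal contribution $v(S \cup \{i\}) - v(S)$ averaged over all orders in which the players can join the coalition.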

On the relationship between Shapley and Owen values

The Shapley value is obtained as an average of Owen values over each set of coalition structures of the same kind, i.e., coalition structures with an equal number of sets sharing the same size.

Axiomatic Attribution for Deep Networks

We study the problem of attributing the prediction of a deep network to its input features, a problem previously studied by several other works. We identify two fundamental axioms, Sensitivity and Implementation Invariance, that attribution methods ought to satisfy.

Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems

The transparency-privacy tradeoff is explored and it is proved that a number of useful transparency reports can be made differentially private with very little addition of noise.

Causal Shapley Values: Exploiting Causal Knowledge to Explain Individual Predictions of Complex Models

A novel framework for computing Shapley values is proposed that generalizes recent work aiming to circumvent the independence assumption, and it is shown how these 'causal' Shapley values can be derived for general causal graphs without sacrificing any of their desirable properties.