# Feature relevance quantification in explainable AI: A causality problem

```bibtex
@article{Janzing2019FeatureRQ,
  title   = {Feature relevance quantification in explainable AI: A causality problem},
  author  = {Dominik Janzing and Lenon Minorics and Patrick Bl{\"o}baum},
  journal = {ArXiv},
  year    = {2019},
  volume  = {abs/1910.13413}
}
```

We discuss promising recent contributions on quantifying feature relevance using Shapley values, where we observed some confusion about which probability distribution is the right one for dropped features. We argue that the confusion stems from not carefully distinguishing between observational and interventional conditional probabilities, and we attempt a clarification based on Pearl's seminal work on causality. We conclude that unconditional rather than conditional expectations provide the right notion…
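The paper's central recommendation, using unconditional (interventional/marginal) expectations when dropping features, can be made concrete. The following is a minimal sketch of my own (not the authors' implementation): features outside a coalition are drawn from their marginal distribution via a background sample, and exact Shapley values are computed by brute-force coalition enumeration, which is feasible only for a handful of features.

```python
import itertools
import math
import numpy as np

def marginal_value(f, x, S, background):
    """Interventional value function: features outside S keep their
    marginal distribution (the background rows), ignoring any
    statistical dependence on the features in S."""
    X = background.copy()
    X[:, S] = x[S]          # clamp the coalition S to the explained point
    return f(X).mean()

def shapley_values(f, x, background):
    """Exact Shapley values by enumerating all coalitions."""
    d = len(x)
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for r in range(d):
            for S in itertools.combinations(others, r):
                w = math.factorial(r) * math.factorial(d - r - 1) / math.factorial(d)
                phi[i] += w * (marginal_value(f, x, list(S) + [i], background)
                               - marginal_value(f, x, list(S), background))
    return phi

# Toy additive model: attributions sum to f(x) - E[f(X)] (efficiency axiom)
f = lambda X: 2 * X[:, 0] + X[:, 1]
rng = np.random.default_rng(0)
background = rng.normal(size=(1000, 2))
x = np.array([1.0, -1.0])
phi = shapley_values(f, x, background)
```

For an additive model the marginal contribution of each feature is the same in every coalition, so `phi[i]` reduces to that feature's term evaluated at `x` minus its background average.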

## 150 Citations

### Counterfactual Shapley Additive Explanations

- Computer Science, FAccT
- 2022

This work proposes a variant of SHAP, Counterfactual SHAP (CF-SHAP), that incorporates counterfactual information to produce a background dataset for use within the marginal (a.k.a. interventional) Shapley value framework.

### PredDiff: Explanations and Interactions from Conditional Expectations

- Computer Science, Artif. Intell.
- 2022

### On Shapley Credit Allocation for Interpretability

- Economics, ArXiv
- 2020

This paper quantifies feature relevance by weaving different kinds of interpretation together with different measures as characteristic functions for Shapley symmetrization, discussing measures of statistical uncertainty and dispersion as informative candidates and their merits in generating explanations for each data point.

### Cotenability and Causality: Explaining Feature Importance Using Shapley Values

- Computer Science
- 2020

A graphical interpretation of Shapley values is presented which clarifies assumptions made during Shapley value calculations and extends Shapley feature importances so that both cotenability and causality are captured, ultimately increasing interpretability of these explanations.

### Towards Cotenable and Causal Shapley Feature Explanations

- Computer Science
- 2020

It is shown how different implementations of Shapley-based feature importances trade off these properties and proposed using medical domain knowledge to group features as a step towards satisfying both causality and cotenability, which would provide model explanations that are more useful in clinical settings.

### On Measuring Causal Contributions via do-interventions

- Economics, ICML
- 2022

Causal contributions measure the strengths of different causes to a target quantity. Understanding causal contributions is important in empirical sciences and data-driven disciplines since it allows…
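The gap between observational conditioning and do-interventions, which this line of work builds on, can be checked numerically. The toy structural causal model below is my own illustration: with X1 → X2 and Y = X1 + X2 (standard normal noise), conditioning gives E[Y | X2 = 1] = 1.5 because observing X2 is evidence about X1, whereas the intervention do(X2 = 1) cuts the X1 → X2 edge and gives E[Y | do(X2 = 1)] = 1.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(size=n)     # X2 depends causally on X1
y = x1 + x2

# Observational conditioning: restrict to samples where X2 is near 1
mask = np.abs(x2 - 1.0) < 0.05
obs = y[mask].mean()             # approaches 1.5: X2 = 1 is evidence about X1

# Interventional do(X2 = 1): set X2 = 1 everywhere, X1 keeps its marginal
y_do = x1 + 1.0
do = y_do.mean()                 # approaches 1.0
```

This is exactly the distinction the surveyed paper invokes for dropped features: replacing a feature via its observational conditional distribution leaks information through dependent features, while the interventional replacement does not.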

### True to the Model or True to the Data?

- Economics, ArXiv
- 2020

It is argued that the choice comes down to whether it is desirable to be true to the model or true to the data, and how possible attributions are impacted by modeling choices.

### Quantifying intrinsic causal contributions via structure preserving interventions

- Computer Science
- 2020

This work proposes a new notion of causal contribution that describes the 'intrinsic' part of a node's contribution to a target node in a DAG, and uses Shapley-based symmetrization to obtain a measure that is invariant across arbitrary orderings of nodes.

### Causal Shapley Values: Exploiting Causal Knowledge to Explain Individual Predictions of Complex Models

- Computer Science, Economics, NeurIPS
- 2020

A novel framework for computing Shapley values is proposed that generalizes recent work aiming to circumvent the independence assumption, and it is shown how these 'causal' Shapley values can be derived for general causal graphs without sacrificing any of their desirable properties.

### Model Explanations via the Axiomatic Causal Lens

- Computer Science, ArXiv
- 2021

This work proposes three explanation measures which aggregate the set of all but-for causes — a necessary and sufficient explanation — into feature importance weights, and is the first to formally bridge the gap between model explanations, game-theoretic influence, and causal analysis.

## References

Showing 1-10 of 28 references.

### Avoiding Discrimination through Causal Reasoning

- Computer Science, NIPS
- 2017

This work crisply articulates why and when observational criteria fail, formalizing what was previously a matter of opinion, and puts forward natural causal non-discrimination criteria together with algorithms that satisfy them.

### Explaining individual predictions when features are dependent: More accurate approximations to Shapley values

- Computer Science, Artif. Intell.
- 2021

### The many Shapley values for model explanation

- Economics, ICML
- 2020

The axiomatic approach is used to study the differences between some of the many operationalizations of the Shapley value for attribution, and a technique called Baseline Shapley (BShap) is proposed that is backed by a proper uniqueness result.
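Baseline Shapley replaces the distribution over dropped features with a single fixed reference point. A minimal sketch of the idea, my own illustration rather than the paper's code:

```python
import itertools
import math
import numpy as np

def bshap(f, x, baseline):
    """Baseline Shapley (BShap): features outside the coalition S are
    held at a single fixed baseline point instead of being averaged
    over a background distribution."""
    d = len(x)
    def v(S):
        z = baseline.copy()
        idx = list(S)
        z[idx] = x[idx]       # coalition features take the explained point's values
        return f(z)
    phi = np.zeros(d)
    for i in range(d):
        others = [j for j in range(d) if j != i]
        for r in range(d):
            for S in itertools.combinations(others, r):
                w = math.factorial(r) * math.factorial(d - r - 1) / math.factorial(d)
                phi[i] += w * (v(list(S) + [i]) - v(S))
    return phi

# Toy multiplicative model: the interaction term x0 * x1 is split evenly
f = lambda z: z[0] * z[1]
phi = bshap(f, np.array([2.0, 3.0]), np.array([0.0, 0.0]))
```

With a zero baseline, both marginal contributions of a feature are zero unless the other feature is already present, so the product 2 × 3 = 6 is shared equally between the two features.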

### Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems

- Computer Science, 2016 IEEE Symposium on Security and Privacy (SP)
- 2016

The transparency-privacy tradeoff is explored and it is proved that a number of useful transparency reports can be made differentially private with very little addition of noise.

### Consistent Individualized Feature Attribution for Tree Ensembles

- Computer Science, ArXiv
- 2018

This work develops fast exact tree solutions for SHAP (SHapley Additive exPlanation) values, which are the unique consistent and locally accurate attribution values. It also proposes a rich visualization of individualized feature attributions that improves over classic attribution summaries and partial dependence plots, and a unique "supervised" clustering.

### Axiomatic Attribution for Deep Networks

- Computer Science, ICML
- 2017

We study the problem of attributing the prediction of a deep network to its input features, a problem previously studied by several other works. We identify two fundamental axioms, Sensitivity and…

### “Why Should I Trust You?”: Explaining the Predictions of Any Classifier

- Computer Science, NAACL
- 2016

LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction.

### Neural Network Attributions: A Causal Perspective

- Computer Science, ICML
- 2019

A new attribution method for neural networks developed using first principles of causality is proposed, and algorithms to efficiently compute the causal effects, as well as scale the approach to data with large dimensionality are proposed.

### A Unified Approach to Interpreting Model Predictions

- Computer Science, NIPS
- 2017

A unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations), which unifies six existing methods and presents new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.

### Fairness through awareness

- Computer Science, ITCS '12
- 2012

A framework for fair classification is presented, comprising a (hypothetical) task-specific metric for determining the degree to which individuals are similar with respect to the classification task at hand, and an algorithm for maximizing utility subject to the fairness constraint that similar individuals are treated similarly.