Algorithms to estimate Shapley value feature attributions
@article{Chen2022AlgorithmsTE,
  title   = {Algorithms to estimate Shapley value feature attributions},
  author  = {Hugh Chen and Ian Covert and Scott M. Lundberg and Su-In Lee},
  journal = {ArXiv},
  year    = {2022},
  volume  = {abs/2207.07605}
}
Feature attributions based on the Shapley value are popular for explaining machine learning models; however, their estimation is complex from both a theoretical and computational standpoint. We disentangle this complexity into two factors: (1) the approach to removing feature information, and (2) the tractable estimation strategy. These two factors provide a natural lens through which we can better understand and compare 24 distinct algorithms. Based on the various feature removal approaches…
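The "tractable estimation strategy" factor the abstract refers to can be illustrated with the classic permutation-sampling Monte Carlo estimator of the Shapley value. This is a minimal sketch on a toy additive cooperative game (illustrative only; the function names and the toy game are not from the paper):

```python
import random

def shapley_permutation_sampling(value_fn, n_players, n_samples=2000, seed=0):
    """Estimate Shapley values by averaging each player's marginal
    contribution over uniformly sampled player orderings."""
    rng = random.Random(seed)
    players = list(range(n_players))
    phi = [0.0] * n_players
    for _ in range(n_samples):
        order = players[:]
        rng.shuffle(order)
        coalition = set()
        prev = value_fn(coalition)
        for p in order:
            coalition.add(p)
            cur = value_fn(coalition)
            phi[p] += cur - prev  # marginal contribution of p in this ordering
            prev = cur
    return [v / n_samples for v in phi]

# Toy game: v(S) = sum of the weights of players in S. The game is
# additive, so the Shapley value of player i is exactly its weight.
weights = [1.0, 2.0, 3.0]
v = lambda S: sum(weights[i] for i in S)
est = shapley_permutation_sampling(v, 3)
# est == [1.0, 2.0, 3.0] (exact here, since every marginal
# contribution of player i equals weights[i])
```

In a model-explanation setting, `value_fn` would be built from the model prediction with some feature-removal approach, which is exactly the first factor the paper disentangles.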
8 Citations
Feature Importance: A Closer Look at Shapley Values and LOCO
- Economics
- 2023
There is much interest lately in explainability in statistics and machine learning. One aspect of explainability is to quantify the importance of various features (or covariates). Two popular methods…
On marginal feature attributions of tree-based models
- Computer Science, ArXiv
- 2023
It is proved that the marginal Shapley values of tree-based models, or more generally marginal feature attributions obtained from a linear game value, are simple (piecewise-constant) functions with respect to a certain finite partition of the input space determined by the trained model.
SHAP-IQ: Unified Approximation of any-order Shapley Interactions
- Computer Science, ArXiv
- 2023
This work proposes SHAPley Interaction Quantification (SHAP-IQ), an efficient sampling-based approximator that computes Shapley interactions for all three definitions, as well as all others that satisfy the linearity, symmetry, and dummy axioms.
Approximating the Shapley Value without Marginal Contributions
- Computer Science, Economics, ArXiv
- 2023
This paper proposes SVARM and Stratified SVARM, two parameter-free, domain-independent approximation algorithms based on a representation of the Shapley value detached from the notion of marginal contributions; they come with strong theoretical guarantees on approximation quality and yield satisfying empirical results.
Learning to Estimate Shapley Values with Vision Transformers
- Computer Science, ArXiv
- 2022
This work uses an attention masking approach to evaluate ViTs with partial information, and develops a procedure to generate Shapley value explanations via a separate, learned explainer model, which provides more accurate explanations than existing methods for ViTs.
Explanation Shift: Investigating Interactions between Models and Shifting Data Distributions
- Computer Science, ArXiv
- 2023
It is found that the modeling of explanation shifts can be a better indicator for detecting out-of-distribution model behaviour than state-of-the-art techniques.
Explanation Shift: Detecting distribution shifts on tabular data via the explanation space
- Computer Science, ArXiv
- 2022
It is found that the modeling of explanation shifts can be a better indicator for the detection of predictive performance changes than state-of-the-art techniques based on representations of distribution shifts.
Approximation of group explainers with coalition structure using Monte Carlo sampling on the product space of coalitions and features
- Computer Science
- 2023
A novel Monte Carlo sampling algorithm is proposed that estimates a wide class of linear game values, as well as coalitional values, for the marginal game based on a given ML model and predictor vector, at a reduced complexity that depends linearly on the size of the background dataset.
References
Showing 1-10 of 93 references
The Explanation Game: Explaining Machine Learning Models Using Shapley Values
- Computer Science, CD-MAKE
- 2020
This work illustrates how subtle differences in the underlying game formulations of existing methods can cause large differences in the attributions for a prediction, presents a general game formulation that unifies existing methods, and enables straightforward confidence intervals on their attributions.
The many Shapley values for model explanation
- Economics, ICML
- 2020
The axiomatic approach is used to study the differences between some of the many operationalizations of the Shapley value for attribution, and a technique called Baseline Shapley (BShap) is proposed that is backed by a proper uniqueness result.
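Baseline Shapley (BShap) removes features by replacing them with values from a fixed baseline input. A minimal sketch of that idea, paired with exact Shapley computation by coalition enumeration (helper names and the toy linear model are illustrative, not the paper's code):

```python
import itertools
import math

def exact_shapley(value_fn, n):
    """Exact Shapley values by enumerating all coalitions:
    phi_i = sum over subsets S of the other players of
    |S|! (n-|S|-1)! / n! * (v(S u {i}) - v(S))."""
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for r in range(len(others) + 1):
            for S in itertools.combinations(others, r):
                w = math.factorial(r) * math.factorial(n - r - 1) / math.factorial(n)
                s = set(S)
                phi[i] += w * (value_fn(s | {i}) - value_fn(s))
    return phi

def baseline_game(model, x, baseline):
    """BShap-style cooperative game: features outside the coalition
    are replaced by their baseline values before calling the model."""
    def v(S):
        z = [x[j] if j in S else baseline[j] for j in range(len(x))]
        return model(z)
    return v

# Toy linear model with a zero baseline: BShap attributes c_i * x_i
# to feature i, so phi == [2.0, 6.0] here.
model = lambda z: 2.0 * z[0] + 3.0 * z[1]
phi = exact_shapley(baseline_game(model, x=[1.0, 2.0], baseline=[0.0, 0.0]), n=2)
```

Enumeration is exponential in the number of features, which is precisely why the sampling estimators surveyed in the main paper exist.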
L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data
- Computer Science, ICLR
- 2019
Two algorithms with linear complexity for instancewise feature importance scoring are developed, and their relationship to the Shapley value and to the closely related Myerson value from cooperative game theory is established.
Explaining individual predictions when features are dependent: More accurate approximations to Shapley values
- Computer Science, Artif. Intell.
- 2021
Problems with Shapley-value-based explanations as feature importance measures
- Economics, ICML
- 2020
It is shown that mathematical problems arise when Shapley values are used for feature importance and that the solutions to mitigate these necessarily induce further complexity, such as the need for causal reasoning.
Sampling Permutations for Shapley Value Estimation
- Computer Science, J. Mach. Learn. Res.
- 2022
This work investigates new approaches based on two classes of approximation methods and compares them empirically: quadrature techniques in an RKHS containing functions of permutations, and connections between the hypersphere S^{d-2} and permutations that are exploited to create practical algorithms for generating permutation samples with good properties.
A Multilinear Sampling Algorithm to Estimate Shapley Values
- Computer Science, Economics, 2020 25th International Conference on Pattern Recognition (ICPR)
- 2021
This work proposes a new sampling method based on a multilinear extension technique from game theory that is applicable to any machine learning model, in particular to multiclass classification and regression problems.
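The multilinear extension idea behind such samplers writes phi_i as an integral over a coalition-inclusion probability q: phi_i = ∫₀¹ E[v(S ∪ {i}) − v(S)] dq, where S contains each other player independently with probability q. A hedged toy sketch using midpoint quadrature over q (illustrative only, not the cited algorithm's implementation):

```python
import random

def shapley_multilinear(value_fn, n_players, n_q=25, n_samples=200, seed=0):
    """Estimate Shapley values via the multilinear extension:
    phi_i = integral over q in [0, 1] of E[v(S u {i}) - v(S)],
    with S drawn by including each other player with probability q.
    The integral is approximated with a midpoint rule on n_q nodes."""
    rng = random.Random(seed)
    phi = [0.0] * n_players
    for k in range(n_q):
        q = (k + 0.5) / n_q  # midpoint quadrature node
        for i in range(n_players):
            acc = 0.0
            for _ in range(n_samples):
                S = {j for j in range(n_players) if j != i and rng.random() < q}
                acc += value_fn(S | {i}) - value_fn(S)
            phi[i] += acc / n_samples
    return [p / n_q for p in phi]

# Toy additive game: the Shapley value of player i is its weight.
weights = [1.0, 2.0, 3.0]
v = lambda S: sum(weights[i] for i in S)
est = shapley_multilinear(v, 3)
# est == [1.0, 2.0, 3.0] (exact here, since every marginal
# contribution of player i equals weights[i])
```

Stratifying the sampling over q is what gives this family of estimators its variance advantage over naive coalition sampling.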
Causal Shapley Values: Exploiting Causal Knowledge to Explain Individual Predictions of Complex Models
- Computer Science, Economics, NeurIPS
- 2020
A novel framework for computing Shapley values that generalizes recent work that aims to circumvent the independence assumption is proposed, and it is shown how these 'causal' Shapley values can be derived for general causal graphs without sacrificing any of their desirable properties.
Shapley Flow: A Graph-based Approach to Interpreting Model Predictions
- Computer Science, AISTATS
- 2021
Shapley Flow is a novel approach to interpreting machine learning models that considers the entire causal graph and assigns credit to edges instead of treating nodes as the fundamental unit of credit assignment; it enables users to understand the flow of importance through a system and to reason about potential interventions.
Attention Flows are Shapley Value Explanations
- Computer Science, ACL
- 2021
It is argued that NLP practitioners should, when possible, adopt attention flow explanations alongside more traditional ones, because attention flows are indeed Shapley values at the layerwise level.