Corpus ID: 227253750

Improving KernelSHAP: Practical Shapley Value Estimation via Linear Regression

@inproceedings{Covert2021ImprovingKP,
  title={Improving KernelSHAP: Practical Shapley Value Estimation via Linear Regression},
  author={Ian Covert and Su-In Lee},
  booktitle={AISTATS},
  year={2021}
}
The Shapley value solution concept from cooperative game theory has become popular for interpreting ML models, but efficiently estimating Shapley values remains challenging, particularly in the model-agnostic setting. We revisit the idea of estimating Shapley values via linear regression to understand and improve upon this approach. By analyzing KernelSHAP alongside a newly proposed unbiased estimator, we develop techniques to detect its convergence and calculate uncertainty estimates. We also… 
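To make the regression formulation concrete, here is a minimal sketch (ours, not the paper's code) of Shapley value estimation via weighted least squares in the KernelSHAP style; the toy game v stands in for a model evaluated on feature subsets, and all names are illustrative:

import itertools
import math
import numpy as np

M = 4  # number of players (features)

def v(S):
    # Toy additive game; in model explanation this would be, e.g.,
    # the model's expected output given only the features in S.
    weights = np.array([1.0, 2.0, 3.0, 4.0])
    return weights[list(S)].sum() if S else 0.0

# Enumerate all proper, non-empty coalitions with Shapley kernel weights.
rows, targets, w = [], [], []
for s in range(1, M):
    kernel = (M - 1) / (math.comb(M, s) * s * (M - s))
    for S in itertools.combinations(range(M), s):
        z = np.zeros(M)
        z[list(S)] = 1.0
        rows.append(z)
        targets.append(v(S) - v(()))  # center by the null coalition's value
        w.append(kernel)
Z, y, w = np.array(rows), np.array(targets), np.array(w)

# Weighted least squares with the efficiency constraint
# sum(phi) = v(grand coalition) - v(empty set), solved via a KKT system.
A = Z.T @ (w[:, None] * Z)
b = Z.T @ (w * y)
ones = np.ones((M, 1))
KKT = np.block([[A, ones], [ones.T, np.zeros((1, 1))]])
rhs = np.append(b, v(tuple(range(M))) - v(()))
phi = np.linalg.solve(KKT, rhs)[:M]
print(phi)  # recovers [1, 2, 3, 4] exactly for this additive game

KernelSHAP samples coalitions according to these kernel weights instead of enumerating them, which is precisely the setting where convergence detection and uncertainty estimates become important.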

CoAI: Cost-Aware Artificial Intelligence for Health Care

TLDR
It is shown that CoAI dramatically reduces the cost of predicting prehospital acute traumatic coagulopathy, intensive-care mortality, and outpatient mortality relative to existing risk scores while improving prediction accuracy, and that it outperforms state-of-the-art cost-sensitive prediction approaches in predictive performance, model cost, and training time.

Rethinking Explainability as a Dialogue: A Practitioner's Perspective

TLDR
Five principles that researchers should follow when designing interactive explanations are outlined as a starting point for future work, and it is shown why natural language dialogues satisfy these principles and are a desirable way to build interactive explanations.

Statistical Aspects of SHAP: Functional ANOVA for Model Interpretation

TLDR
This paper studies the algorithm used to estimate SHAP scores and shows that it is a transformation of the functional ANOVA decomposition, and argues that the connection between machine learning explainability and sensitivity analysis is illuminating in this case.

SHAFF: Fast and consistent SHApley eFfect estimates via random Forests

TLDR
SHAFF, a fast and accurate Shapley effect estimator that remains reliable even when input variables are dependent, is introduced; its efficiency is demonstrated through both a theoretical analysis of its consistency and extensive experiments showing practical performance improvements over competitors.

Algorithms to estimate Shapley value feature attributions

TLDR
This work describes the multiple types of Shapley value feature attributions and the methods to calculate each one, covering two distinct families of approaches: model-agnostic and model-specific approximations.

FastSHAP: Real-Time Shapley Value Estimation

TLDR
FastSHAP is introduced, a method for estimating Shapley values in a single forward pass using a learned explainer model that amortizes the cost of explaining many inputs via a learning approach inspired by the Shapley value’s weighted least squares characterization.
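For reference, the weighted least squares characterization invoked here can be written as follows (standard notation, not taken from this page):

\phi(v) \;=\; \operatorname*{arg\,min}_{\phi \in \mathbb{R}^d}
  \sum_{\emptyset \subsetneq S \subsetneq D} \mu(S)
  \Bigl( v(S) - v(\emptyset) - \sum_{i \in S} \phi_i \Bigr)^{2}
\quad \text{s.t.} \quad \sum_{i \in D} \phi_i = v(D) - v(\emptyset),
\qquad
\mu(S) \;=\; \frac{d-1}{\binom{d}{|S|}\,|S|\,(d-|S|)},

where D is the set of all d features. FastSHAP trains an explainer network to map an input directly to \phi against this objective in expectation, which is what allows a single forward pass per explanation.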

Explaining by Removing: A Unified Framework for Model Explanation

TLDR
A new class of methods, removal-based explanations, based on the principle of simulating feature removal to quantify each feature's influence, is established, and a unified framework is developed that helps practitioners better understand model explanation tools.

Interpretable (not just posthoc-explainable) medical claims modeling for discharge placement to prevent avoidable all-cause readmissions or death

TLDR
An inherently interpretable multilevel Bayesian modeling framework inspired by the piecewise linearity of ReLU-activated deep neural networks is developed, and it is demonstrated how the black-box post hoc explainer tool SHAP generates explanations that are not supported by the fitted model and, if taken at face value, do not offer enough context to make a model actionable.

Learning to Estimate Shapley Values with Vision Transformers

TLDR
This work uses an attention masking approach to evaluate ViTs with partial information and develops a procedure for generating Shapley value explanations via a separate, learned explainer model, finding that this approach provides more accurate explanations than any existing method for ViTs.

WeightedSHAP: analyzing and improving Shapley based feature attributions

The Shapley value is a popular approach for measuring the influence of individual features. While Shapley feature attribution is built upon desiderata from game theory, some of its constraints may be less

References

SHOWING 1-10 OF 44 REFERENCES

Explaining Deep Neural Networks with a Polynomial Time Algorithm for Shapley Values Approximation

TLDR
This work proposes a novel, polynomial-time approximation of Shapley values in deep neural networks and shows that this method produces significantly better approximations of Shapley values than existing state-of-the-art attribution methods.

Bounding the Estimation Error of Sampling-based Shapley Value Approximation With/Without Stratifying

TLDR
Non-asymptotic bounds on the estimation error are provided for two cases, where the variance and where the range of the players' marginal contributions is known, and it is shown that when the range is large relative to the Shapley value, the bound can be improved.
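For context, the sampling-based estimator these bounds concern averages marginal contributions over random permutations; a minimal sketch (ours; the game v and all names are illustrative) with per-player standard errors:

import numpy as np

rng = np.random.default_rng(0)
M = 4  # number of players

def v(S):
    # Toy additive game standing in for a real characteristic function.
    return sum(i + 1.0 for i in S)

n_perms = 2000
contrib = np.zeros((n_perms, M))
for t in range(n_perms):
    S = set()
    for i in rng.permutation(M):
        before = v(S)
        S.add(int(i))
        contrib[t, i] = v(S) - before  # marginal contribution of player i

phi_hat = contrib.mean(axis=0)                         # Shapley estimates
stderr = contrib.std(axis=0, ddof=1) / np.sqrt(n_perms)
print(phi_hat, stderr)  # phi = [1, 2, 3, 4] exactly for this game

Stratifying in the title's sense allocates samples across coalition sizes (positions in the permutation) rather than over whole permutations, which can tighten such error estimates.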

The Explanation Game: Explaining Machine Learning Models Using Shapley Values

TLDR
This work illustrates how subtle differences in the underlying game formulations of existing methods can cause large differences in the attributions for a prediction, presents a general game formulation that unifies existing methods, and enables straightforward confidence intervals on their attributions.

L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data

TLDR
Two algorithms with linear complexity for instancewise feature importance scoring are developed, and the relationship of these methods to the Shapley value, and to the closely related Myerson value from cooperative game theory, is established.

Risk Attribution Using the Shapley Value: Methodology and Policy Applications

We present the Shapley Value as a methodology for risk attribution and use it to derive measures of banks’ systemic importance. The methodology possesses attractive properties, such as fairness and

Analysis of regression in game theory approach

When working with multiple regression analysis, a researcher usually wants to know the comparative importance of predictors in the model. However, the analysis can be made difficult because of

Sobol' Indices and Shapley Value

  • A. Owen, SIAM/ASA Journal on Uncertainty Quantification, 2014
TLDR
When "variance explained" is taken as the combined value, the Shapley value of individual variables matches neither of the usual Sobol' indices but is bracketed between them, for variance explained or any totally monotone game.
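Stated compactly in our notation (independent inputs; see the paper for the precise statement): with \underline{\tau}_i^2 = \mathrm{Var}(\mathbb{E}[Y \mid X_i]) the unnormalized first-order Sobol' index and \bar{\tau}_i^2 = \mathbb{E}[\mathrm{Var}(Y \mid X_{-i})] the unnormalized total index, the bracketing is

\underline{\tau}_i^{2} \;\le\; \phi_i \;\le\; \bar{\tau}_i^{2}.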

Extremal Principle Solutions of Games in Characteristic Function Form: Core, Chebychev and Shapley Value Generalizations

In 1966, W. Lucas [1] exhibited a 10-person game with no von Neumann-Morgenstern solution. D. Schmeidler [2] then originated the nucleolus, proved it exists for every game, is unique, and is contained

Prior Solutions: Extensions of Convex Nucleus Solutions to Chance-Constrained Games.

Abstract: The theory of n-person cooperative games in characteristic function form is extended to games with a stochastic characteristic function, where the values that the characteristic function

Shapley Effects for Global Sensitivity Analysis: Theory and Computation

TLDR
Owen proposed an alternative sensitivity measure based on the concept of the Shapley value in game theory and showed that it always sums to the correct total variance if inputs are independent; this measure, called Owen's measure, is analyzed here in the case of dependent inputs.
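Concretely, the Shapley effect of input i applies the standard Shapley formula to a variance-based game; a common formulation (our notation; normalizations vary across papers) is

\mathrm{Sh}_i \;=\; \frac{1}{d} \sum_{S \subseteq D \setminus \{i\}}
  \binom{d-1}{|S|}^{-1}
  \bigl( v(S \cup \{i\}) - v(S) \bigr),
\qquad
v(S) = \mathrm{Var}\bigl(\mathbb{E}[Y \mid X_S]\bigr),

so that with independent inputs the effects sum to \mathrm{Var}(Y), the property noted above.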