An Efficient Explanation of Individual Classifications using Game Theory

@article{trumbelj2010AnEE,
  title={An Efficient Explanation of Individual Classifications using Game Theory},
  author={Erik {\vS}trumbelj and Igor Kononenko},
  journal={J. Mach. Learn. Res.},
  year={2010},
  volume={11},
  pages={1-18}
}
We present a general method for explaining individual predictions of classification models. The method is based on fundamental concepts from coalitional game theory, and predictions are explained with the contributions of individual feature values. We overcome the method's initial exponential time complexity with a sampling-based approximation. In the experimental part of the paper we use the developed method on models generated by several well-known machine learning algorithms on both synthetic and…
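
The sampling-based approximation mentioned above can be sketched concretely. The following is a minimal illustration under stated assumptions, not the authors' reference implementation: predict, x, and X are hypothetical placeholders for a model's scoring function (mapping a single 1-D instance to a scalar class score), the instance being explained, and a matrix of background instances. For each sampled feature permutation, the feature values of x are revealed one at a time on top of a random background instance, and each feature is credited with the resulting change in the score.

import numpy as np

def sampled_contributions(predict, x, X, n_samples=1000, seed=None):
    # Monte Carlo approximation of per-feature contributions: for each
    # sampled permutation of the features, reveal the values of x in that
    # order over a random background instance and record each feature's
    # marginal effect on the model's score. All names are illustrative.
    rng = np.random.default_rng(seed)
    n_features = x.shape[0]
    phi = np.zeros(n_features)
    for _ in range(n_samples):
        order = rng.permutation(n_features)   # random feature ordering
        z = X[rng.integers(len(X))].copy()    # random background instance
        for i in order:
            before = predict(z)               # score before revealing feature i
            z[i] = x[i]                       # reveal feature i from x
            phi[i] += predict(z) - before     # marginal contribution of i
    return phi / n_samples                    # average over all samples

Because the per-permutation contributions telescope, the returned values sum to predict(x) minus the average background score, mirroring the efficiency property that the method inherits from the Shapley value.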

Citations

The Explanation Game: Explaining Machine Learning Models with Cooperative Game Theory

This work illustrates how subtle differences in the underlying game formulations of existing methods can cause large differences in attribution for a prediction, and proposes a general framework for generating explanations for ML models, called formulate, approximate, and explain (FAE).

The Explanation Game: Explaining Machine Learning Models Using Shapley Values

This work illustrates how subtle differences in the underlying game formulations of existing methods can cause large differences in the attributions for a prediction, and presents a general game formulation that unifies existing methods and enables straightforward confidence intervals on their attributions.

Explaining prediction models and individual predictions with feature contributions

A sensitivity analysis-based method for explaining prediction models that can be applied to any type of classification or regression model, and which is equivalent to commonly used additive model-specific methods when explaining an additive model.

Cooperative Game Theory for Machine Learning Tasks

A new way of interpreting categorical variables built upon axioms of coalitional game theory is proposed, and a counterexample shows why the current approach leads to incorrect results.

The Shapley Value of coalition of variables provides better explanations

A Python library that reliably computes conditional expectations and Shapley values (SV) for tree-based models is implemented and compared with state-of-the-art algorithms on toy models and real data sets.

A Game Theoretic Approach to Class-wise Selective Rationalization

This work proposes a new game-theoretic approach to class-dependent rationalization, in which the method is specifically trained to highlight evidence supporting alternative conclusions and is able to identify both factual and counterfactual rationales consistent with human rationalization.

Coalitional Strategies for Efficient Individual Prediction Explanation

Methods based on the detection of relevant groups of attributes influencing a prediction, named coalitions, are provided and compared with the literature, showing that these coalitional methods are more efficient than existing ones such as SHapley Additive exPlanations (SHAP).

An exploration of the influence of path choice in game-theoretic attribution algorithms

It is argued that the multiple paths employed by interventional Shapley methods extend away from the training-data manifold and are therefore more likely to pass through regions where the model has little support; the straight-line path is therefore advocated, since it will almost always pass closer to the data manifold.

Explainable Artificial Intelligence: How Subsets of the Training Data Affect a Prediction

This paper considers data-driven models that are already developed, implemented, and trained, and proposes a novel methodology called Shapley values for training-data subset importance, arguing that the explanations let us perceive more of the inner workings of the algorithms and illustrating how models producing similar predictions can be based on very different parts of the training data.

Accurate Shapley Values for explaining tree-based models

This work recalls an invariance principle for Shapley values (SV) and derives the correct approach for computing the SV of categorical variables, which are particularly sensitive to the encoding used; it also introduces two estimators of SV that exploit the tree structure efficiently and are more accurate than state-of-the-art methods.
...

References


Feature Selection via Coalitional Game Theory

Empirical comparison with several other existing feature selection methods shows that the backward elimination variant of CSA leads to the most accurate classification results on an array of data sets.

Explaining Classifications For Individual Instances

It is demonstrated that the generated explanations closely follow the learned models, and a visualization technique is presented that shows the utility of the approach and enables the comparison of different prediction methods.

Polynomial calculation of the Shapley value based on sampling

Fair Attribution of Functional Contribution in Artificial and Biological Networks

The multi-perturbation Shapley value analysis, an axiomatic, scalable, and rigorous method for deducing causal function localization from multiple-perturbation data, accurately quantifies the contributions of network elements and their interactions.

Visual Explanation of Evidence with Additive Classifiers

A framework, ExplainD, is described for explaining decisions made by classifiers that use additive evidence, which applies to many widely used classifiers, including linear discriminants and many additive models.

Contact personalization using a score understanding method

This paper presents a method to interpret the output of a classification (or regression) model based on two concepts: the variable importance and the value importance of the variable.

Wrappers for Feature Subset Selection

Random Forests

Internal estimates monitor error, strength, and correlation; these are used to show the response to increasing the number of features used in the forest, and are also applicable to regression.

Transversality of the Shapley value

The paper by S. Moretti and F. Patrone is a remarkable survey on the use of the Shapley value in many different domains (so different that one could also entitle the paper “Versatility of the Shapley value”).