Corpus ID: 202660574

The Explanation Game: Explaining Machine Learning Models with Cooperative Game Theory

@article{Merrick2019TheEG,
  title={The Explanation Game: Explaining Machine Learning Models with Cooperative Game Theory},
  author={Luke Merrick and Ankur Taly},
  journal={ArXiv},
  year={2019},
  volume={abs/1909.08128}
}
  • Luke Merrick, Ankur Taly
  • Published 2019
  • Mathematics, Computer Science
  • ArXiv
  • A number of techniques have been proposed to explain a machine learning (ML) model's prediction by attributing it to the corresponding input features. Popular among these are techniques that apply the Shapley value method from cooperative game theory. While existing papers focus on the axiomatic motivation of Shapley values, and efficient techniques for computing them, they neither justify the game formulations used nor address the uncertainty implicit in their methods' outputs. For instance…
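
The attribution machinery the abstract refers to assigns feature i the Shapley value phi_i = sum over subsets S of N\{i} of |S|!(|N|-|S|-1)!/|N|! * (v(S ∪ {i}) - v(S)), where v is a cooperative game constructed from the model. The sketch below is a minimal, hypothetical illustration of the common sampling-based approximation, in which features absent from a coalition are imputed from a single reference point. The names sampling_shapley, model_fn, x, and x_ref are illustrative placeholders, not the paper's implementation, and the single-reference game is only one of the formulations the paper scrutinizes.

# Hypothetical sketch: Monte Carlo estimate of Shapley-value feature attributions
# for a single prediction. Absent features are imputed from one reference point.
import numpy as np

def sampling_shapley(model_fn, x, x_ref, num_samples=1000, seed=0):
    """Estimate per-feature Shapley attributions of model_fn(x) relative to x_ref."""
    rng = np.random.default_rng(seed)
    n = len(x)
    attributions = np.zeros(n)
    for _ in range(num_samples):
        order = rng.permutation(n)          # random feature ordering
        z = x_ref.copy()                    # start from the reference point
        prev = model_fn(z)
        for i in order:
            z[i] = x[i]                     # switch feature i to its explained value
            curr = model_fn(z)
            attributions[i] += curr - prev  # marginal contribution of feature i
            prev = curr
    return attributions / num_samples       # average over sampled orderings

# Toy usage: for a linear model, the estimates approach w * (x - x_ref).
if __name__ == "__main__":
    w = np.array([1.0, -2.0, 0.5])
    model = lambda z: float(w @ z)
    x = np.array([3.0, 1.0, 2.0])
    x_ref = np.zeros(3)
    print(sampling_shapley(model, x, x_ref))

Because such estimates are Monte Carlo averages over sampled orderings, they carry sampling error, one source of the output uncertainty that the abstract notes is rarely reported.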
    11 Citations

    An exploration of the influence of path choice in game-theoretic attribution algorithms
    Problems with Shapley-value-based explanations as feature importance measures (18 citations; Highly Influenced)
    Explaining the data or explaining a model? Shapley values that uncover non-linear dependencies (2 citations)
    Explaining by Removing: A Unified Framework for Model Explanation (1 citation; Highly Influenced)
    Principles and Practice of Explainable Machine Learning (2 citations)
    Machine learning interpretability through contribution-value plots
    Rapid Trust Calibration through Interpretable and Uncertainty-Aware AI (4 citations)
