From local explanations to global understanding with explainable AI for trees

@article{Lundberg2020FromLE,
  title={From local explanations to global understanding with explainable AI for trees},
  author={Scott M. Lundberg and Gabriel G. Erion and Hugh Chen and Alex J. DeGrave and Jordan M Prutkin and Bala G. Nair and Ronit Katz and Jonathan Himmelfarb and Nisha Bansal and Su-In Lee},
  journal={Nature Machine Intelligence},
  year={2020},
  volume={2},
  pages={56-67}
}
Tree-based machine learning models such as random forests, decision trees and gradient boosted trees are popular nonlinear predictive models, yet comparatively little attention has been paid to explaining their predictions. Here we improve the interpretability of tree-based models through three main contributions. (1) A polynomial time algorithm to compute optimal explanations based on game theory. (2) A new type of explanation that directly measures local feature interaction effects. (3) A new… 
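
As a concrete illustration, the sketch below runs the open-source shap package's TreeExplainer, which implements the paper's method, on an assumed gradient-boosted model; the dataset, model settings and summary plot are illustrative assumptions, not the paper's experiments.

```python
# Minimal sketch, assuming the `shap` and `xgboost` packages and a toy dataset.
import shap
import xgboost
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = xgboost.XGBClassifier(n_estimators=100, max_depth=4).fit(X, y)

# (1) Exact, polynomial-time Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# (2) Local feature interaction effects.
interaction_values = explainer.shap_interaction_values(X)

# (3) Many local explanations combined into a global view of the model.
shap.summary_plot(shap_values, X)
```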

Data-driven advice for interpreting local and global model predictions in bioinformatics problems

A thorough comparison of the explanations computed by both CFCs and SHapley Additive exPlanations (SHAP) on a set of 164 publicly available classification problems is contributed in order to provide data-driven algorithm recommendations to current researchers.

Local Interpretable Model Agnostic Shap Explanations for machine learning models

This proposed ML explanation technique uses Shapley values under the LIME paradigm to achieve the following: explain the prediction of any model by using a locally faithful and interpretable decision tree model, on which the TreeExplainer is used to calculate the Shapley values and give visually interpretable explanations.
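
One way to realize the described pipeline is sketched below under our own assumptions (the sampling scheme, kernel width and surrogate depth are not from the paper): fit a locally weighted decision-tree surrogate around a single instance, then run shap.TreeExplainer on that surrogate.

```python
# Hedged sketch, not the authors' code: local decision-tree surrogate + TreeExplainer.
import numpy as np
import shap
from sklearn.tree import DecisionTreeRegressor

def local_tree_shap(black_box_predict, x, X_background, n_samples=2000, kernel_width=0.75):
    """`black_box_predict` maps a 2D array of rows to a 1D array of outputs."""
    rng = np.random.default_rng(0)
    # Perturb the instance using the background data's feature-wise spread (LIME-style sampling).
    scale = X_background.std(axis=0)
    Z = x + rng.normal(0.0, 1.0, size=(n_samples, x.shape[0])) * scale
    # Proximity weights: closer perturbations matter more.
    dist = np.linalg.norm((Z - x) / (scale + 1e-12), axis=1)
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)
    # Locally faithful, interpretable surrogate fitted to the black box's outputs.
    surrogate = DecisionTreeRegressor(max_depth=4)
    surrogate.fit(Z, black_box_predict(Z), sample_weight=weights)
    # Shapley values of the surrogate for the instance of interest.
    explainer = shap.TreeExplainer(surrogate, data=Z)
    return explainer.shap_values(x.reshape(1, -1))
```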

GAM Forest Explanation

This work proposes a post hoc explanation method of large forests, named GAM-based Explanation of Forests (GEF), which builds a Generalized Additive Model (GAM) able to explain, both locally and globally, the impact on the predictions of a limited set of features and feature interactions.
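
A rough sketch of the general idea (distilling a forest into a GAM), not the GEF algorithm itself; it assumes the third-party pygam package, and the dataset and smoothing choices are illustrative.

```python
# Hedged sketch: fit a forest, then fit a GAM to the forest's predictions so each
# feature's shape function can be inspected globally and summed locally per row.
from pygam import LinearGAM, s
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True)
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Distill: train the GAM on the forest's outputs, not the raw labels,
# so it explains what the forest predicts rather than the data itself.
terms = s(0)
for i in range(1, X.shape[1]):
    terms = terms + s(i)
gam = LinearGAM(terms).fit(X, forest.predict(X))

# Global explanation: one shape function per feature.
for i in range(X.shape[1]):
    grid = gam.generate_X_grid(term=i)
    shape_function = gam.partial_dependence(term=i, X=grid)
```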

Optimal Local Explainer Aggregation for Interpretable Prediction

A local explainer aggregation method which selects local explainers using non-convex optimization and uses an integer optimization framework to combine local explainers into a near-global aggregate explainer, which improves on fidelity over existing global explainer methods.

Evaluating Local Model-Agnostic Explanations of Learning to Rank Models with Decision Paths

This work proposes to focus on tree-based LTR models, from which the ground truth feature importance scores can be extracted using decision paths, and compares two recently proposed explanation techniques when using decision trees and gradient boosting models on the MQ2008 dataset.
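
One common way to extract such decision-path ("ground truth") attributions from a single tree is the Saabas-style contribution sketched below; the helper name and its use of scikit-learn internals are our assumptions, not the paper's code.

```python
# Hedged sketch: per-instance feature contributions read off a decision path.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def decision_path_contributions(tree: DecisionTreeRegressor, x):
    t = tree.tree_
    node = 0
    contributions = np.zeros(x.shape[0])
    # Walk from the root to the leaf; each split's feature is credited with the
    # change in the node's mean prediction caused by taking that branch.
    while t.children_left[node] != -1:
        feature = t.feature[node]
        child = (t.children_left[node]
                 if x[feature] <= t.threshold[node]
                 else t.children_right[node])
        contributions[feature] += t.value[child][0][0] - t.value[node][0][0]
        node = child
    return contributions  # added to the root mean, these sum to the tree's prediction for x
```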

Global Explanation of Tree-Ensembles Models Based on Item Response Theory

This research proposes a measure called Explainability based on Item Response Theory (eXirt), which is capable of explaining tree-ensemble models by using the properties of Item Response Theory (IRT), and demonstrates that the advocated methodology generates global explanations of tree-ensemble models that have not yet been explored.

Comparing Explanation Methods for Traditional Machine Learning Models Part 1: An Overview of Current Methods and Quantifying Their Disagreement

The atmospheric science community is made aware of recently developed explainability methods for traditional ML models, and their use is demonstrated and visualized with a software package developed by the authors (scikit-explain).

Interpretable Local Concept-based Explanation with Human Feedback to Predict All-cause Mortality

Concept-based Local Explanations with Feedback (CLEF), a novel local model-agnostic explanation framework for learning a set of high-level, transparent concept definitions in high-dimensional tabular data that uses clinician-labeled concepts rather than raw features, is proposed.

Global explanations with decision rules: a co-learning approach

This paper introduces the soft truncated Gaussian mixture analysis (STruGMA), a probabilistic model which encapsulates hyper-rectangle decision rules, and proposes a co-learning framework to learn decision rules as explanations of black-box models through knowledge distillation and simultaneously constrain the black-box model by these explanations.

Consistent Sufficient Explanations and Minimal Local Rules for explaining regression and classification models

This work introduces an accurate and fast estimator of the conditional probability of maintaining the same prediction via random forests for any data and shows its efficiency through a theoretical analysis of its consistency.
...

References


Model Agnostic Supervised Local Explanations

It is demonstrated, on several UCI datasets, that MAPLE is at least as accurate as random forests and that it produces more faithful local explanations than LIME, a popular interpretability system.
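
A hedged sketch of MAPLE's core idea rather than the authors' implementation: random-forest leaf co-occurrence defines local weights, and a weighted linear model supplies the explanation; function names and hyperparameters are illustrative.

```python
# Hedged sketch: forest-defined neighborhood + weighted linear local explanation.
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge

def maple_style_explanation(X_train, y_train, x, n_estimators=100):
    forest = RandomForestRegressor(n_estimators=n_estimators, random_state=0).fit(X_train, y_train)
    train_leaves = forest.apply(X_train)           # (n_samples, n_trees) leaf indices
    query_leaves = forest.apply(x.reshape(1, -1))  # (1, n_trees)
    # Weight = fraction of trees in which a training point shares the query's leaf.
    weights = (train_leaves == query_leaves).mean(axis=1)
    local_model = Ridge(alpha=1e-3).fit(X_train, y_train, sample_weight=weights)
    return local_model.coef_  # per-feature local effect estimates
```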

A Unified Approach to Interpreting Model Predictions

A unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations), which unifies six existing methods and presents new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.
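
The model-agnostic Kernel SHAP estimator proposed in the paper is available in the shap package; the sketch below assumes an illustrative classifier and background sample size.

```python
# Minimal sketch of Kernel SHAP via shap.KernelExplainer on a toy classifier.
import shap
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_iris(return_X_y=True)
model = GradientBoostingClassifier().fit(X, y)

# A small background sample defines the reference distribution for "missing" features.
background = shap.sample(X, 50)
explainer = shap.KernelExplainer(model.predict_proba, background)

# Additive feature attributions for one prediction: per class, they sum to
# model.predict_proba(x) minus the explainer's expected value.
shap_values = explainer.shap_values(X[0])
```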

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction.
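
A minimal usage sketch with the open-source lime package that accompanies the paper; the dataset, classifier and parameter values are illustrative assumptions.

```python
# Minimal sketch: tabular LIME explanation of one prediction.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)
# Fit a sparse linear model on perturbed samples around one instance, weighted
# by proximity, to obtain a locally faithful explanation.
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=5)
print(exp.as_list())
```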

Explaining prediction models and individual predictions with feature contributions

A sensitivity analysis-based method for explaining prediction models that can be applied to any type of classification or regression model, and which is equivalent to commonly used additive model-specific methods when explaining an additive model.
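
Contributions of this kind can be approximated by Monte Carlo sampling over feature orderings; the sketch below is our own compact rendering of that sampling scheme, with hypothetical function and argument names.

```python
# Hedged sketch: sampling-based approximation of one feature's contribution.
import numpy as np

def sampled_contribution(predict, X_background, x, feature, n_samples=1000, rng=None):
    """`predict` maps a 2D array of rows to a 1D array of outputs."""
    if rng is None:
        rng = np.random.default_rng(0)
    n_features = x.shape[0]
    total = 0.0
    for _ in range(n_samples):
        z = X_background[rng.integers(len(X_background))].copy()  # random reference row
        order = rng.permutation(n_features)                       # random feature order
        # Features preceding `feature` in the order are taken from x, the rest from z.
        preceding = order[: np.where(order == feature)[0][0]]
        with_f = z.copy()
        with_f[preceding] = x[preceding]
        without_f = with_f.copy()
        with_f[feature] = x[feature]
        total += predict(with_f.reshape(1, -1))[0] - predict(without_f.reshape(1, -1))[0]
    return total / n_samples
```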

Unmasking Clever Hans predictors and assessing what machines really learn

The authors investigate how nonlinear learning machines approach learning in order to assess the dependability of their decision making, and propose a semi-automated Spectral Relevance Analysis that provides a practically effective way of characterizing and validating the behavior of nonlinear learning machines.

Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems

The transparency-privacy tradeoff is explored and it is proved that a number of useful transparency reports can be made differentially private with very little addition of noise.

Influence-Directed Explanations for Deep Convolutional Networks

Evaluation demonstrates that influence-directed explanations identify influential concepts that generalize across instances, can be used to extract the “essence” of what the network learned about a class, and isolate individual features the network uses to make decisions and distinguish related classes.

Learning Important Features Through Propagating Activation Differences

DeepLIFT (Deep Learning Important FeaTures), a method for decomposing the output prediction of a neural network on a specific input by backpropagating the contributions of all neurons in the network to every feature of the input, is presented.
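
A short sketch using Captum's DeepLift implementation rather than the authors' original code; the toy network, input and zero baseline are illustrative assumptions.

```python
# Minimal sketch: DeepLIFT-style contribution scores via Captum.
import torch
import torch.nn as nn
from captum.attr import DeepLift

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

x = torch.randn(1, 10)
baseline = torch.zeros(1, 10)  # reference input the contributions are measured against

# Backpropagate contribution scores from the output (target class 1) to each input feature.
attributions = DeepLift(model).attribute(x, baselines=baseline, target=1)
```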

The Judicial Demand for Explainable Artificial Intelligence

This essay argues that judges should demand explanations for how machine learning algorithms reach particular decisions, recommendations, or predictions, and should favor the greater involvement of public actors in shaping xAI, which to date has been left in private hands.

Towards better understanding of gradient-based attribution methods for Deep Neural Networks

This work analyzes four gradient-based attribution methods, formally proves conditions of equivalence and approximation between them, and constructs a unified framework which enables a direct comparison as well as an easier implementation.
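
Two of the analyzed attribution families (saliency and gradient times input) reduce to a few lines of PyTorch; the toy model and input below are illustrative assumptions.

```python
# Minimal sketch: saliency and gradient * input attributions for one prediction.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

x = torch.randn(1, 10, requires_grad=True)
score = model(x)[0, 1]  # scalar output for the class of interest
score.backward()

# Saliency is |d score / d x|; gradient * input is its first-order Taylor refinement.
saliency = x.grad.abs()
gradient_times_input = (x.grad * x).detach()
```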