Interpretable Predictions of Tree-based Ensembles via Actionable Feature Tweaking

@article{Tolomei2017InterpretablePO,
  title={Interpretable Predictions of Tree-based Ensembles via Actionable Feature Tweaking},
  author={Gabriele Tolomei and Fabrizio Silvestri and Andrew Haines and Mounia Lalmas},
  journal={Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining},
  year={2017}
}
  • Gabriele Tolomei, Fabrizio Silvestri, Andrew Haines, Mounia Lalmas
  • Published 20 June 2017
  • Computer Science
  • Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
Machine-learned models are often described as "black boxes". In many real-world applications however, models may have to sacrifice predictive power in favour of human-interpretability. When this is the case, feature engineering becomes a crucial task, which requires significant and time-consuming human effort. Whilst some features are inherently static, representing properties that cannot be influenced (e.g., the age of an individual), others capture characteristics that could be adjusted (e.g… 
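
The abstract is cut off before it describes the method itself, but the "actionable feature tweaking" idea named in the title is: for an instance the ensemble predicts negatively, enumerate the positive root-to-leaf paths of each tree, perturb the instance just enough to satisfy one of those paths, and keep the lowest-cost perturbation that flips the ensemble's prediction. The sketch below illustrates that idea on a scikit-learn RandomForestClassifier; the epsilon margin, the L2 cost, and the helper names (positive_paths, tweak) are illustrative assumptions, not the paper's exact algorithm.

# Minimal sketch of feature tweaking on a random forest (assumptions noted in comments).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

def positive_paths(tree, positive_class=1):
    """Yield root-to-leaf paths as lists of (feature, threshold, direction) ending in a positive leaf."""
    t = tree.tree_
    stack = [(0, [])]                                         # (node id, conditions collected so far)
    while stack:
        node, conds = stack.pop()
        if t.children_left[node] == -1:                       # leaf node
            if np.argmax(t.value[node][0]) == positive_class:
                yield conds
            continue
        f, thr = t.feature[node], t.threshold[node]
        stack.append((t.children_left[node], conds + [(f, thr, "<=")]))
        stack.append((t.children_right[node], conds + [(f, thr, ">")]))

def tweak(forest, x, epsilon=0.1, positive_class=1):
    """Return the lowest-cost tweaked copy of x that the forest labels positive, or None."""
    best, best_cost = None, np.inf
    for tree in forest.estimators_:
        for conds in positive_paths(tree, positive_class):
            x_new = x.copy()
            for f, thr, direction in conds:                   # satisfy the path with an epsilon margin
                if direction == "<=" and x_new[f] > thr:
                    x_new[f] = thr - epsilon
                elif direction == ">" and x_new[f] <= thr:
                    x_new[f] = thr + epsilon
            if forest.predict(x_new.reshape(1, -1))[0] == positive_class:
                cost = np.linalg.norm(x_new - x)              # L2 cost is an assumption, not the paper's choice
                if cost < best_cost:
                    best, best_cost = x_new, cost
    return best

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
forest = RandomForestClassifier(n_estimators=10, random_state=0).fit(X, y)
x = X[forest.predict(X) == 0][0]                              # a negatively predicted instance
print(tweak(forest, x))

The returned vector, if any, is a candidate "tweaked" version of x whose changed features indicate which adjustable characteristics drive the prediction; in practice one would also cap the allowed cost and restrict tweaks to non-static features.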

Citations

Generating Actionable Interpretations from Ensembles of Decision Trees

This paper presents a technique that exploits the feedback loop originating from the internals of any ensemble of decision trees to offer recommendations for transforming a negatively predicted instance into a positively predicted one; tests confirm that the solution is able to suggest changes to feature values that help interpret the rationale behind model predictions.

LIMEtree: Interactively Customisable Explanations Based on Local Surrogate Multi-output Regression Trees

This work introduces a model-agnostic and post-hoc local explainability technique for black-box predictions called LIMEtree, which employs surrogate multi-output regression trees and can produce a range of diverse explanation types, including contrastive and counterfactual explanations praised in the literature.

Explaining Predictions from Tree-based Boosting Ensembles

This work focuses on generating local explanations for individual predictions of tree-based ensembles, specifically Gradient Boosting Decision Trees (GBDTs), and aims to extend an existing explanation method to GBDTs.

Post-hoc explanation of black-box classifiers using confident itemsets

An exact counterfactual-example-based approach to tree-ensemble models interpretability

A positive answer is found for any model in the tree-ensemble family, which encompasses a wide range of models used for massive heterogeneous industrial data processing such as XGBoost, CatBoost, LightGBM, and random forests: an exact geometrical characterisation of the decision regions can be derived in the form of a collection of multidimensional intervals.

Principles and Practice of Explainable Machine Learning

A survey is undertaken to help industry practitioners (and data scientists more broadly) better understand the field of explainable machine learning and apply the right tools, and the main developments in the field are discussed.

Estimation and Interpretation of Machine Learning Models with Customized Surrogate Model

The significance of this novel technique is that data-science developers will not have to perform strenuous hands-on feature-engineering tasks, and end-users will receive a comprehensive, graphical explanation of complex models, consequently building trust in the model.

ReLACE: Reinforcement Learning Agent for Counterfactual Explanations of Arbitrary Predictive Models

This work formulates the problem of crafting CFs as a sequential decision-making task, finds the optimal CFs via deep reinforcement learning (DRL) with a discrete-continuous hybrid action space, and develops an algorithm to extract explainable decision rules from the DRL agent’s policy, so as to make the process of generating CFs itself transparent.

LionForests: Local Interpretation of Random Forests through Path Selection

This paper provides a sequence of actions for shedding light on the predictions of the misjudged family of tree-ensemble algorithms; by using classic unsupervised learning techniques and an enhanced similarity metric to wander among the transparent trees inside a forest, following breadcrumbs, the interpretable essence of tree ensembles arises.

Interpretable Narrative Explanation for ML Predictors with LP: A Case Study for XAI

A general framework is sketched in which symbolic and sub-symbolic approaches can fruitfully combine to produce intelligent behaviour in AI applications, and the work focuses on narrative explanations for ML predictors, exploiting the logical knowledge obtained by translating decision-tree predictors into logic programs.
...

References

SHOWING 1-10 OF 27 REFERENCES

Model-Agnostic Interpretability of Machine Learning

This paper argues for explaining machine learning predictions using model-agnostic approaches, treating the machine learning models as black-box functions, which provide crucial flexibility in the choice of models, explanations, and representations, improving debugging, comparison, and interfaces for a variety of users and models.

“Why Should I Trust You?”: Explaining the Predictions of Any Classifier

LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction.

Optimal Action Extraction for Random Forests and Boosted Trees

The NP-hardness of the optimal action extraction problem for additive tree models (ATMs) is proved, and the problem is formulated as an integer linear program that can be efficiently solved by existing packages.

Post-Analysis of Learned Rules

The proposed technique is general and highly interactive, and will be particularly useful in data mining and data analysis, where the rules may change over time and it is important to know what the changes are.

Intriguing properties of neural networks

It is found that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis, suggesting that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.

Efficient Action Extraction with Many-to-Many Relationship between Actions and Features

Action sets with minimal total execution cost are extracted from a classifier based on a many-to-many relationship between actions and features, making the approach applicable to more real-world problems.

Postprocessing decision trees to extract actionable knowledge

A novel algorithm is presented that suggests actions to change customers from an undesired status to a desired one while maximizing an objective function: the expected net profit.

Domain-Driven Actionable Knowledge Discovery in the Real World

This paper proposes a practical perspective, referred to as domain-driven in-depth pattern discovery (DDID-PD), which presents a domain-driven view of discovering knowledge satisfying real business needs, and demonstrates its deployment in mining actionable trading strategies from Australian Stock Exchange data.

A relevance model based filter for improving ad quality

This paper improves a model that applies collaborative filtering to click data by training a filter to predict pure relevance, and finds that using features based on the organic search results improves the relevance-based filter's performance.

Extracting Actionable Knowledge from Decision Trees

Novel algorithms are presented that suggest actions to change customers from an undesired status to a desired one while maximizing an objective function: the expected net profit.