• Corpus ID: 233181641

Triplot: model agnostic measures and visualisations for variable importance in predictive models that take into account the hierarchical correlation structure

@article{Pekala2021TriplotMA,
  title={Triplot: model agnostic measures and visualisations for variable importance in predictive models that take into account the hierarchical correlation structure},
  author={Katarzyna Pekala and Katarzyna Woźnica and Przemysław Biecek},
  journal={ArXiv},
  year={2021},
  volume={abs/2104.03403}
}
Abstract

One of the key elements of explanatory analysis of a predictive model is to assess the importance of individual variables. The rapid development of predictive model exploration (also called explainable artificial intelligence or interpretable machine learning) has led to the popularization of local (instance-level) and global (dataset-level) methods, such as Permutational Variable Importance, Shapley Values (SHAP), Local Interpretable Model Explanations (LIME…
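
Since the abstract focuses on permutation-based variable importance, here is a minimal base-R sketch of that idea; permutation_importance(), the squared-error loss, and the mtcars example are illustrative stand-ins, not the paper's triplot implementation.

# Permutational Variable Importance, minimal sketch:
# importance of a variable = loss after permuting it - baseline loss.
permutation_importance <- function(model, X, y,
                                   loss = function(y, p) mean((y - p)^2)) {
  baseline <- loss(y, predict(model, X))
  sapply(names(X), function(v) {
    X_perm <- X
    X_perm[[v]] <- sample(X_perm[[v]])  # break the link between v and y
    loss(y, predict(model, X_perm)) - baseline
  })
}

# Example: importance of the predictors of mpg under a linear model
fit <- lm(mpg ~ ., data = mtcars)
permutation_importance(fit, mtcars[, -1], mtcars$mpg)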

Citations

Machine Learning Workflow to Explain Black-Box Models for Early Alzheimer’s Disease Classification Evaluated for Multiple Datasets

To interpret eXtreme Gradient Boosting, Random Forest, and Support Vector Machine black-box models, a workflow based on Shapley values was developed; it identified biologically plausible associations that correlated moderately to strongly with the feature importances.

Towards Explainable Meta-learning

This paper proposes using techniques developed for eXplainable Artificial Intelligence (XAI) to examine and extract knowledge from black-box surrogate models, and is the first to show how post-hoc explainability can be used to improve meta-learning.

References


iml: An R package for Interpretable Machine Learning

Given the pace of research on new machine learning models, it is preferable to have model-agnostic tools that can be applied to a random forest as well as to a neural network, as this improves the adoption of machine learning.
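
As an illustration of that model-agnostic design, below is a short sketch against iml's documented R6 interface (Predictor$new() and FeatureImp$new()); the random forest and the mtcars data are arbitrary stand-ins.

# Wrap any model behind a common Predictor interface, then apply a
# model-agnostic explainer to it (here: permutation feature importance).
library(iml)
library(randomForest)

rf <- randomForest(mpg ~ ., data = mtcars)
predictor <- Predictor$new(rf, data = mtcars[, -1], y = mtcars$mpg)
importance <- FeatureImp$new(predictor, loss = "mae")
plot(importance)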

All Models are Wrong, but Many are Useful: Learning a Variable's Importance by Studying an Entire Class of Prediction Models Simultaneously

Model class reliance (MCR) is proposed as the range of variable importance (VI) values across all well-performing models in a prespecified class; this gives a more comprehensive description of importance by accounting for the fact that many prediction models, possibly of different parametric forms, may fit the data well.
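
A rough illustration of the MCR idea, reusing the permutation_importance() sketch from above: compute importance under several well-performing models and report its range per variable. This simplification uses a small finite set of models, whereas MCR is defined over an entire prespecified model class.

# Importance ranges across a set of models rather than a single one
library(randomForest)

models <- list(
  linear = lm(mpg ~ ., data = mtcars),
  forest = randomForest(mpg ~ ., data = mtcars)
)
vi <- sapply(models, permutation_importance,
             X = mtcars[, -1], y = mtcars$mpg)

# Per-variable lower and upper importance across the model set
data.frame(lower = apply(vi, 1, min), upper = apply(vi, 1, max))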

DALEX: Explainers for Complex Predictive Models in R

  • P. Biecek
  • Computer Science
  • J. Mach. Learn. Res.
  • 2018
A consistent collection of explainers for predictive models (a.k.a. black boxes), built on a uniform, standardized grammar of model exploration that can be easily extended.
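
To show what that uniform grammar looks like in practice, a hedged sketch against the DALEX API (explain() and model_parts() are documented functions of the package; the random forest and data are placeholders):

# Wrap a model in an explainer once, then apply generic explainers to it
library(DALEX)
library(randomForest)

rf <- randomForest(mpg ~ ., data = mtcars)
explainer <- explain(rf, data = mtcars[, -1], y = mtcars$mpg,
                     label = "random forest")

vip <- model_parts(explainer)  # permutation-based variable importance
plot(vip)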

“Why Should I Trust You?”: Explaining the Predictions of Any Classifier

LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction.
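
The recipe is easy to sketch for numeric tabular data: perturb the instance, weight the perturbations by proximity, and fit a weighted linear surrogate whose coefficients act as the explanation. The sketch below illustrates that idea only; lime_sketch() and its parameters are hypothetical names, not the reference LIME implementation.

# Bare-bones LIME-style local surrogate for a numeric instance x0
lime_sketch <- function(predict_fun, X, x0, n = 500, width = 2) {
  sds <- sapply(X, sd)
  # sample perturbations of x0, with noise scaled by each feature's sd
  Z <- as.data.frame(lapply(seq_along(X),
                            function(j) x0[[j]] + rnorm(n, sd = sds[j])))
  names(Z) <- names(X)
  # proximity weights: perturbations closer to x0 count more
  d2 <- rowSums(scale(Z, center = as.numeric(x0), scale = sds)^2)
  w <- exp(-d2 / width^2)
  # interpretable (linear) model fitted locally around the prediction
  surrogate <- lm(y ~ ., data = cbind(y = predict_fun(Z), Z), weights = w)
  coef(surrogate)
}

# Example: explain the first observation's random forest prediction
rf <- randomForest::randomForest(mpg ~ ., data = mtcars)
lime_sketch(function(Z) predict(rf, Z), mtcars[, -1], mtcars[1, -1])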

An Introduction to Statistical Learning

An Introduction to Statistical Learning provides an accessible overview of the essential toolset for making sense of the vast and complex data sets that have emerged in science, industry, and other sectors in the past twenty years.

Explaining Classifications For Individual Instances

It is demonstrated that the generated explanations closely follow the learned models, and a visualization technique is presented that shows the utility of the approach and enables the comparison of different prediction methods.
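
A minimal sketch of this instance-level idea, under the simplifying assumption that knowledge of a variable is "removed" by averaging predictions over its values observed in the data; prediction_difference() is a hypothetical name, not the authors' code.

# Contribution of a variable = prediction with it - prediction without it
prediction_difference <- function(predict_fun, X, x0) {
  f_full <- predict_fun(x0)
  sapply(names(X), function(v) {
    X_marg <- x0[rep(1, nrow(X)), , drop = FALSE]  # replicate x0
    X_marg[[v]] <- X[[v]]                          # "forget" variable v
    f_full - mean(predict_fun(X_marg))
  })
}

rf <- randomForest::randomForest(mpg ~ ., data = mtcars)
prediction_difference(function(Z) predict(rf, Z), mtcars[, -1], mtcars[1, -1])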

corrr: Correlations in R

  • 2020
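
For context, corrr gives a tidy interface to correlation matrices (correlate() and stretch() are from its documented API); the final hclust() lines hint at the hierarchical correlation structure that Triplot builds its aspect hierarchy on.

library(corrr)

cors <- correlate(mtcars)  # tidy data frame of pairwise correlations
stretch(cors)              # long format: columns x, y, r

# Cluster variables by correlation: the kind of hierarchy Triplot visualises
hc <- hclust(as.dist(1 - abs(cor(mtcars))))
plot(hc)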

localModel: LIME-Based Explanations with Interpretable Inputs

  • 2019