GLocalX - From Local to Global Explanations of Black Box AI Models

@article{Setzu2021GLocalXF,
  title={GLocalX - From Local to Global Explanations of Black Box AI Models},
  author={Mattia Setzu and Riccardo Guidotti and Anna Monreale and Franco Turini and Dino Pedreschi and Fosca Giannotti},
  journal={ArXiv},
  year={2021},
  volume={abs/2101.07685}
}

Citations

Towards Knowledge-driven Distillation and Explanation of Black-box Models
TLDR
A knowledge-driven distillation approach to explaining black-box models by means of perceptron (or threshold) connectives, which enrich knowledge representation languages such as Description Logics with linear operators that serve as a bridge between statistical learning and logical reasoning.
Logic Programming for XAI: A technical perspective
TLDR
This work proposes using Constraint Logic Programming to construct explanations that incorporate prior knowledge, as well as Meta-Reasoning to track model and explanation changes over time.
A Robust Unsupervised Ensemble of Feature-Based Explanations using Restricted Boltzmann Machines
TLDR
This paper proposes a technique for aggregating the feature attributions of different explanatory algorithms using Restricted Boltzmann Machines to achieve a more reliable and robust interpretation of deep neural networks.
A Quantitative Evaluation of Global, Rule-Based Explanations of Post-Hoc, Model Agnostic Methods
TLDR
This study proposes a novel comparative approach that evaluates the rulesets produced by five model-agnostic, post-hoc rule extractors against eight quantitative metrics, using the Friedman test to check whether any method consistently performed better than the others on the selected metrics and could be considered superior.
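As a side note, the Friedman test mentioned above is available directly in SciPy; the following is a minimal sketch with made-up per-dataset fidelity scores for three hypothetical rule extractors (all values are illustrative, not taken from the study).

# Minimal sketch: Friedman test over hypothetical per-dataset scores
# of three rule extractors (all numbers are illustrative).
from scipy.stats import friedmanchisquare

fidelity_a = [0.91, 0.88, 0.93, 0.85, 0.90]
fidelity_b = [0.89, 0.86, 0.92, 0.83, 0.88]
fidelity_c = [0.80, 0.79, 0.85, 0.78, 0.81]

stat, p_value = friedmanchisquare(fidelity_a, fidelity_b, fidelity_c)
# A small p-value suggests at least one extractor ranks consistently
# differently from the others across datasets.
print(f"Friedman chi-square = {stat:.3f}, p = {p_value:.4f}")
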
Model learning with personalized interpretability estimation (ML-PIE)
TLDR
This paper uses a bi-objective evolutionary algorithm to synthesize models with trade-offs between accuracy and a user-specific notion of interpretability, and finds that users tend to prefer models found using the proposed approach over models found using non-personalized interpretability indices.
Classification of Explainable Artificial Intelligence Methods through Their Output Formats
TLDR
This systematic review aimed to organise the existing XAI methods into a hierarchical classification system that builds upon and extends existing taxonomies by adding a significant dimension—the output formats.
An Empirical Investigation Into Deep and Shallow Rule Learning
TLDR
This paper empirically compares deep and shallow rule sets that have been optimized with a uniform, general mini-batch based optimization algorithm, and finds that deep rule networks outperformed their shallow counterparts, which is taken as an indication that it is worthwhile to devote more effort to learning deep rule structures from data.
Understanding Diversity in Human-AI Data: What Cognitive Style Disaggregation Reveals
TLDR
It was found that participants’ cognitive styles clustered not only by gender but also across different age groups and across all 5 cognitive style spectra; although there were instances where applying the guidelines closed inclusivity issues, there were also stubborn inclusivity issues and inadvertent introductions of new ones.
Diagnosing AI Explanation Methods with Folk Concepts of Behavior
When explaining AI behavior to humans, how is the communicated information being comprehended by the human explainee, and does it match what the explanation attempted to communicate? …
Benchmarking and Survey of Explanation Methods for Black Box Models
TLDR
A categorization of explanation methods based on the type of explanation returned is provided, together with a visual comparison among explanations and a quantitative benchmarking.
...

References

SHOWING 1-10 OF 42 REFERENCES
Meaningful Explanations of Black Box AI Decision Systems
TLDR
This work focuses on the urgent open challenge of how to construct meaningful explanations of opaque AI/ML systems, introducing the local-to-global framework for black box explanation, articulated along three lines: the language for expressing explanations in terms of logic rules, statistical and causal interpretation, and the inference of local explanations for revealing the decision rationale for a specific case.
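To make the rule language mentioned here concrete, the following is a minimal, hypothetical sketch of a logic-rule explanation as a premise-outcome pair; the feature names, thresholds, and class labels are illustrative assumptions, not the framework's actual implementation.

# Hypothetical sketch: a logic-rule explanation as premise -> outcome,
# where the premise is a conjunction of feature conditions.
from dataclasses import dataclass
from typing import List

@dataclass
class Condition:
    feature: str
    op: str          # "<=" or ">"
    value: float

    def holds(self, record: dict) -> bool:
        x = record[self.feature]
        return x <= self.value if self.op == "<=" else x > self.value

@dataclass
class Rule:
    premise: List[Condition]   # read as a conjunction
    outcome: str               # predicted class label

    def covers(self, record: dict) -> bool:
        return all(c.holds(record) for c in self.premise)

rule = Rule(premise=[Condition("age", ">", 40.0),
                     Condition("income", "<=", 30000.0)],
            outcome="deny")
print(rule.covers({"age": 52, "income": 18000}))   # True -> rule predicts "deny"
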
A Unified Approach to Interpreting Model Predictions
TLDR
A unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations), which unifies six existing methods and presents new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.
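As a usage note, SHAP has an open-source Python implementation (the shap package); a minimal sketch on a placeholder model and dataset (both are assumptions for illustration) looks like the following.

# Minimal sketch of computing SHAP attributions with the shap package.
# The model and dataset are placeholders for illustration.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# TreeExplainer exploits the tree structure; shap.KernelExplainer is the
# model-agnostic fallback for arbitrary predictors.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:10])   # additive per-feature attributions
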
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
TLDR
This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI, and review the existing approaches regarding the topic, discuss trends surrounding its sphere, and present major research trajectories.
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
TLDR
LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction.
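For reference, LIME's open-source Python package can be used as in the minimal sketch below; the dataset, model, and parameter choices are placeholders for illustration.

# Minimal sketch of a local LIME explanation with the lime package.
# Dataset, model, and parameters are placeholders for illustration.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)
# Fit a sparse linear surrogate around one instance and read off the
# feature weights that locally approximate the black box.
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=3)
print(exp.as_list())
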
Explainable AI: The New 42?
TLDR
Explainable AI is not a new field; rather, the evolution of formal reasoning architectures to incorporate principled probabilistic reasoning helped address the capture and use of uncertain knowledge.
Factual and Counterfactual Explanations for Black Box Decision Making
TLDR
A local rule-based explanation method, providing faithful explanations of the decision made by a black box classifier on a specific instance, outperforms existing approaches in terms of the quality of the explanations and of the accuracy in mimicking the black box.
Explaining Multi-label Black-Box Classifiers for Health Applications
TLDR
MARLENA is proposed, a model-agnostic method which explains multi-label black box decisions and performs well in terms of mimicking the black box behavior while at the same time gaining a notable amount of interpretability through compact decision rules, i.e. rules of limited length.
Explanation in Artificial Intelligence: Insights from the Social Sciences
Anchors: High-Precision Model-Agnostic Explanations
We introduce a novel model-agnostic system that explains the behavior of complex models with high-precision rules called anchors, representing local, “sufficient” conditions for predictions.
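To illustrate the idea (not the authors' actual search procedure), the following hypothetical sketch estimates the precision of a candidate anchor: the fraction of perturbed samples, with the anchored features held fixed, on which the model's prediction does not change.

# Hypothetical sketch of the idea behind an anchor: a rule is an anchor
# when the prediction is (almost) unchanged on perturbations of the
# instance that keep the rule's conditions fixed. Names and the
# perturbation scheme are illustrative assumptions.
import numpy as np

def estimate_precision(predict, instance, anchored, X_pool, n_samples=1000, seed=0):
    """Fraction of perturbations, with the anchored features held fixed,
    on which the model's prediction matches the prediction for `instance`."""
    rng = np.random.default_rng(seed)
    target = predict(instance.reshape(1, -1))[0]

    # Perturb by resampling the non-anchored features from a data pool,
    # keeping the anchored feature values of the explained instance.
    rows = rng.integers(0, len(X_pool), size=n_samples)
    samples = X_pool[rows].copy()
    samples[:, anchored] = instance[anchored]

    return float(np.mean(predict(samples) == target))

A candidate whose estimated precision clears a chosen threshold (for example 0.95) can be read as a local, “sufficient” condition for the prediction.
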
Interpretable Decision Sets: A Joint Framework for Description and Prediction
TLDR
This work proposes interpretable decision sets, a framework for building predictive models that are highly accurate, yet also highly interpretable, and provides a new approach to interpretable machine learning that balances accuracy, interpretability, and computational efficiency.
...