# Relation-Based Counterfactual Explanations for Bayesian Network Classifiers

@inproceedings{Albini2020RelationBasedCE,
  title     = {Relation-Based Counterfactual Explanations for Bayesian Network Classifiers},
  author    = {Emanuele Albini and Antonio Rago and Pietro Baroni and Francesca Toni},
  booktitle = {IJCAI},
  year      = {2020}
}

We propose a general method for generating counterfactual explanations (CFXs) for a range of Bayesian Network Classifiers (BCs), e.g. single- or multi-label, binary or multidimensional. We focus on explanations built from relations of (critical and potential) influence between variables, indicating the reasons for classifications, rather than any probabilistic information. We show by means of a theoretical analysis of CFXs’ properties that they serve the purpose of indicating (potentially…

## 18 Citations

### Influence-Driven Explanations for Bayesian Network Classifiers

- Computer Science, PRICAI
- 2021

This work demonstrates IDXs' capability to explain various forms of BCs, e.g., naive or multi-label, binary or categorical, and also integrate recent approaches to explanations for BCs from the literature.

### Persuasive Contrastive Explanations for Bayesian Networks

- Computer Science, ECSQARU
- 2021

Explanation in Artificial Intelligence is often focused on providing reasons for why a model under consideration and its outcome are correct. Recently, research in explainable machine learning has…

### Realistic Counterfactual Explanations by Learned Relations

- Computer Science, ArXiv
- 2022

This paper proposes a novel approach to realistic counterfactual explanations that preserves relationships between data attributes by directly learning the relationships with a variational auto-encoder, without domain knowledge, and then learning to perturb the latent space accordingly.

### Formalising the Robustness of Counterfactual Explanations for Neural Networks

- Computer Science, ArXiv
- 2022

This work introduces an abstraction framework based on interval neural networks to verify the robustness of CFXs against a possibly infinite set of changes to the model parameters, i.e., weights and biases, and demonstrates how embedding Δ-robustness within existing methods can provide CFXs which are provably robust.

### Counterfactual Shapley Additive Explanations

- Computer Science, FAccT
- 2022

This work proposes a variant of SHAP, Counterfactual SHAP (CF-SHAP), that incorporates counterfactual information to produce a background dataset for use within the marginal (a.k.a. interventional) Shapley value framework.

### Explaining Causal Models with Argumentation: the Case of Bi-variate Reinforcement

- Computer Science, Philosophy, KR
- 2022

A conceptualisation for generating argumentation frameworks (AFs) from causal models for the purpose of forging explanations for the models’ outputs is introduced, based on reinterpreting desirable properties of semantics of AFs as explanation moulds, which are means for characterising the relations in the causal model argumentatively.

### Persuasive Contrastive Explanations (Extended Abstract)

- Computer Science
- 2021

Explanation in Artificial Intelligence is often focused on providing reasons for why a model under consideration and its outcome are correct. Recently, research in explainable machine learning has…

### From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI

- Computer Science, ArXiv
- 2022

The so-called Co-12 properties serve as categorization scheme for systematically reviewing the evaluation practice of more than 300 papers published in the last 7 years at major AI and ML conferences that introduce an XAI method.

### Incorporating prior knowledge from counterfactuals into knowledge graph reasoning

- Computer Science, Knowl. Based Syst.
- 2021

### Argumentative XAI: A Survey

- Computer Science, IJCAI
- 2021

This survey overviews the literature focusing on different types of explanation, different models with which argumentation-based explanations are deployed, different forms of delivery, and different argumentation frameworks they use, and lays out a roadmap for future work.

## References

Showing 1–10 of 27 references

### A Symbolic Approach to Explaining Bayesian Network Classifiers

- Computer Science, IJCAI
- 2018

We propose an approach for explaining Bayesian network classifiers, which is based on compiling such classifiers into decision functions that have a tractable and symbolic form. We introduce two…

### Explanation Trees for Causal Bayesian Networks

- Computer Science, UAI
- 2008

This paper explicates the desiderata of an explanation and confronts them with the concept of explanation proposed by existing methods, and introduces causal explanation trees, based on the construction of explanation trees using the measure of causal information flow.

### Compiling Bayesian Network Classifiers into Decision Graphs

- Computer Science, AAAI
- 2019

An algorithm is proposed for compiling Bayesian network classifiers into decision graphs that mimic the input and output behavior of the classifiers, which are tractable and can be exponentially smaller in size than decision trees.

### CXPlain: Causal Explanations for Model Interpretation under Uncertainty

- Computer Science, Biology, NeurIPS
- 2019

The task of providing explanations for the decisions of machine-learning models is framed as a causal learning task, and causal explanation (CXPlain) models are trained that learn to estimate to what degree certain inputs cause outputs in another machine-learning model.

### "Why Should I Trust You?": Explaining the Predictions of Any Classifier

- Computer Science, HLT-NAACL Demos
- 2016

LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction.

### Anchors: High-Precision Model-Agnostic Explanations

- Computer Science, AAAI
- 2018

We introduce a novel model-agnostic system that explains the behavior of complex models with high-precision rules called anchors, representing local, "sufficient" conditions for predictions. We…

### Discrete Bayesian Network Classifiers

- Computer Science, ACM Comput. Surv.
- 2014

This article surveys the whole set of discrete Bayesian network classifiers devised to date, organized in increasing order of structure complexity: naive Bayes, selective naive Bayes, semi-naive Bayes, one-dependence Bayesian classifiers, k-dependence Bayesian classifiers, Bayesian network-augmented naive Bayes, Markov blanket-based Bayesian classifiers, unrestricted Bayesian classifiers, and Bayesian multinets.

### A review of explanation methods for Bayesian networks

- Computer Science, The Knowledge Engineering Review
- 2002

The basic properties that characterise explanation methods are described and the methods developed to date for explanation in Bayesian networks are reviewed.

### A Unified Approach to Interpreting Model Predictions

- Computer Science, NIPS
- 2017

A unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations), which unifies six existing methods and presents new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.