A Symbolic Approach to Explaining Bayesian Network Classifiers

Andy Shih, Arthur Choi, Adnan Darwiche
We propose an approach for explaining Bayesian network classifiers, which is based on compiling such classifiers into decision functions that have a tractable and symbolic form. We introduce two types of explanations for why a classifier may have classified an instance positively or negatively and suggest algorithms for computing these explanations. The first type of explanation identifies a minimal set of the currently active features that is responsible for the current classification, while… 
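The first type of explanation described in the abstract (a minimal set of currently active features responsible for the classification) can be illustrated by brute-force search over feature subsets. This is only a sketch: the `decision` function below is a hypothetical stand-in for a compiled classifier, not the paper's compilation pipeline, and the subset enumeration is exponential rather than the paper's tractable algorithm.

```python
from itertools import combinations, product

def decision(x):
    """Toy decision function over 4 binary features (a hypothetical
    stand-in for a compiled Bayesian network classifier): positive iff
    at least two of the first three features are on, or feature 3 is on."""
    return (x[0] + x[1] + x[2] >= 2) or x[3] == 1

def is_sufficient(instance, subset):
    """A subset S of features suffices for a positive instance when fixing
    the instance's values on S forces a positive decision under every
    completion of the remaining features."""
    free = [i for i in range(len(instance)) if i not in subset]
    for completion in product([0, 1], repeat=len(free)):
        x = list(instance)
        for i, v in zip(free, completion):
            x[i] = v
        if not decision(x):
            return False
    return True

def minimal_explanation(instance):
    """Smallest set of the instance's feature settings responsible for the
    positive classification, found by checking subsets smallest-first."""
    n = len(instance)
    for size in range(n + 1):
        for subset in combinations(range(n), size):
            if is_sufficient(instance, subset):
                return {i: instance[i] for i in subset}
    return None

print(minimal_explanation((1, 1, 0, 1)))  # feature 3 alone forces a positive
```

For the instance `(1, 1, 0, 1)` the search returns `{3: 1}`: feature 3 being on already guarantees a positive decision regardless of the other features.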


Formal Verification of Bayesian Network Classifiers

This paper shows that the approach of first compiling a given classifier into a tractable representation called an Ordered Decision Diagram also enables verifying the behavior of classifiers.

Compiling Bayesian Network Classifiers into Decision Graphs

An algorithm is proposed for compiling Bayesian network classifiers into decision graphs that mimic the input-output behavior of the classifiers; these graphs are tractable and can be exponentially smaller than decision trees.

Influence-Driven Explanations for Bayesian Network Classifiers

This work demonstrates IDXs' capability to explain various forms of BCs, e.g., naive or multi-label, binary or categorical, and also to integrate recent approaches to explanations for BCs from the literature.

Relation-Based Counterfactual Explanations for Bayesian Network Classifiers

It is shown empirically for various BCs that CFXs provide useful information in real-world settings, and that they have inherent advantages over existing explanation methods in the literature.

Explainable AI for Classification using Probabilistic Logic Inference

This work identifies decisive features that are responsible for a classification as explanations, and produces results similar to those found by SHAP, a state-of-the-art Shapley-value-based method.

Consistent Sufficient Explanations and Minimal Local Rules for explaining regression and classification models

This work introduces an accurate and fast estimator of the conditional probability of maintaining the same prediction, via Random Forests, for any data (X, Y), and shows its efficiency through a theoretical analysis of its consistency.

Probabilistic Sufficient Explanations

Probabilistic sufficient explanations are introduced, which formulate explaining an instance of classification as choosing the “simplest” subset of features such that only observing those features is “sufficient” to explain the classification.

On the Tractability of Explaining Decisions of Classifiers

This work investigates the computational complexity of providing a formally-correct and minimal explanation of a decision taken by a classifier and shows that tractable classes coincide for abductive and contrastive explanations in the constrained or unconstrained settings.

Interpretability of Bayesian Network Classifiers: OBDD Approximation and Polynomial Threshold Functions

It is shown that for Tree Augmented Naive Bayes Classifiers (TAN) there is an efficiently computable approximation of polynomial size.

Explaining Neural Network Decisions Is Hard

It is shown that no algorithm will provably find small relevant sets of input features even if they exist, and that approximating this function even in a single point up to any non-trivial approximation factor is NP-hard.

Reasoning about Bayesian Network Classifiers

This paper presents an algorithm for converting any naive Bayes classifier into an ODD, and it is shown theoretically and experimentally that this algorithm can give us an ODD that is tractable in size even given an intractable number of instances.
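The naive-Bayes-to-ODD conversion mentioned above can be sketched in miniature. A naive Bayes classifier's decision reduces to a threshold test on a sum of per-feature log-odds weights; the sketch below compiles that test into an ordered decision diagram by merging nodes that induce the same decision over all completions of the remaining features. The weights are hypothetical, and the equivalence test enumerates completions by brute force rather than using the efficient merging of the actual algorithm.

```python
from itertools import product

# Toy naive Bayes over three binary features in log-odds form:
# classify positive iff PRIOR + sum_i WEIGHTS[i][x_i] > 0.
# (Hypothetical numbers, for illustration only.)
PRIOR = -0.5
WEIGHTS = [(0.0, 1.2), (0.0, 0.8), (0.0, 0.3)]  # (weight if x_i=0, if x_i=1)

def build_odd():
    """Compile the decision function into an ordered decision diagram,
    merging nodes with identical decision behavior on the remaining
    features (brute-force equivalence test)."""
    unique = {}  # (level, decision signature) -> shared node

    def signature(i, s):
        # Decision for every completion of features i..n-1, given the
        # log-odds s accumulated so far; equal signatures = equivalent nodes.
        rest = WEIGHTS[i:]
        return tuple(
            s + sum(w[b] for w, b in zip(rest, bits)) > 0
            for bits in product([0, 1], repeat=len(rest))
        )

    def make(i, s):
        key = (i, signature(i, s))
        if key not in unique:
            if i == len(WEIGHTS):
                unique[key] = s > 0  # leaf: the final decision
            else:
                low = make(i + 1, s + WEIGHTS[i][0])
                high = make(i + 1, s + WEIGHTS[i][1])
                # Drop the test entirely when both branches coincide.
                unique[key] = low if low is high else (i, low, high)
        return unique[key]

    return make(0, PRIOR)

def evaluate(node, x):
    """Follow the diagram from the root to a True/False leaf."""
    while isinstance(node, tuple):
        i, low, high = node
        node = high if x[i] else low
    return node

# The diagram agrees with the direct threshold test on all 8 instances.
odd = build_odd()
for x in product([0, 1], repeat=3):
    assert evaluate(odd, x) == (PRIOR + sum(WEIGHTS[i][x[i]] for i in range(3)) > 0)
```

Hash-consing nodes on their decision signature is what keeps the diagram small: once the accumulated log-odds makes the outcome certain, all continuations collapse into a single leaf.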

Optimal Feature Selection for Decision Robustness in Bayesian Networks

This work proposes the first algorithm to compute the expected same-decision probability for general Bayesian network classifiers, based on compiling the network into a tractable circuit representation, and develops a search algorithm for optimal feature selection that utilizes efficient incremental circuit modifications.

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction.

Monotonicity in Bayesian Networks

It is shown that establishing whether a network exhibits any of these properties of monotonicity is coNP^PP-complete in general, and remains coNP-complete for polytrees.

Compiling Probabilistic Graphical Models Using Sentential Decision Diagrams

A novel and efficient way to encode the factors of a given model directly to SDDs, bypassing the CNF representation is described, which is as effective as those based on d-DNNFs, and at times, orders-of-magnitude faster.

Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance

This work proposes anchor-LIME (aLIME), a model-agnostic technique that produces high-precision rule-based explanations with very clear coverage boundaries; aLIME is compared to linear LIME in simulated experiments, and its flexibility is demonstrated with qualitative examples from a variety of domains and tasks.

Algorithms and Applications for the Same-Decision Probability

It is proved that computing the non-myopic value of information is complete for the same complexity class as computing the SDP, and it is shown that the recently introduced notion, Same-Decision Probability, can be useful as both a stopping and a selection criterion.
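The Same-Decision Probability discussed above can be computed by brute force on a small naive Bayes model: it is the probability, given the current evidence, that observing the remaining hidden features would leave the threshold decision unchanged. The sketch below uses hypothetical CPT numbers and enumerates hidden instantiations directly, unlike the circuit-based algorithms these papers develop.

```python
from itertools import product

# Toy naive Bayes: class prior P(C=1) and, for each binary feature X_i,
# the conditional P(X_i=1 | C).  All numbers are hypothetical.
P_C1 = 0.3
P_X1_GIVEN_C = [(0.2, 0.9), (0.4, 0.7), (0.1, 0.6)]  # (given C=0, given C=1)
THRESHOLD = 0.5  # decide positive iff P(C=1 | observations) >= THRESHOLD

def prob_of(assignment):
    """P(assignment), marginalizing over the class."""
    total = 0.0
    for c, pc in ((0, 1 - P_C1), (1, P_C1)):
        term = pc
        for i, v in assignment.items():
            p1 = P_X1_GIVEN_C[i][c]
            term *= p1 if v else (1 - p1)
        total += term
    return total

def posterior(assignment):
    """P(C=1 | assignment), assignment maps feature index -> 0/1."""
    num = P_C1
    for i, v in assignment.items():
        p1 = P_X1_GIVEN_C[i][1]
        num *= p1 if v else (1 - p1)
    return num / prob_of(assignment)

def same_decision_probability(evidence, hidden):
    """Probability, given the current evidence, that also observing the
    hidden features leaves the threshold decision unchanged
    (brute force over all hidden instantiations)."""
    current = posterior(evidence) >= THRESHOLD
    p_e = prob_of(evidence)
    sdp = 0.0
    for bits in product([0, 1], repeat=len(hidden)):
        full = dict(evidence)
        full.update(zip(hidden, bits))
        if (posterior(full) >= THRESHOLD) == current:
            sdp += prob_of(full) / p_e  # P(hidden bits | evidence)
    return sdp

# With feature 0 observed on, how robust is the positive decision to
# the two still-unobserved features?
print(same_decision_probability({0: 1}, [1, 2]))
```

A value near 1 says the decision is robust and further observations are unlikely to flip it, which is what makes the SDP usable as the stopping and selection criterion described above.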

Anchors: High-Precision Model-Agnostic Explanations

We introduce a novel model-agnostic system that explains the behavior of complex models with high-precision rules called anchors, representing local, "sufficient" conditions for predictions.

Streaming Weak Submodularity: Interpreting Neural Networks on the Fly

This paper casts interpretability of black-box classifiers as a combinatorial maximization problem and proposes an efficient streaming algorithm to solve it subject to cardinality constraints and provides a constant factor approximation guarantee for this general class of functions.

Same-decision probability: A confidence measure for threshold-based decisions