Publications
Relation-Based Counterfactual Explanations for Bayesian Network Classifiers
TLDR
We propose a general method for generating counterfactual explanations (CFXs) for a range of Bayesian Network Classifiers (BCs), e.g. single- or multi-label, binary or multidimensional.
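For intuition only, here is a minimal sketch of the general notion of a counterfactual explanation for a classifier, not the relation-based method of the paper: given a discrete input, search for a small set of feature-value changes that flips the classifier's prediction. The toy data, classifier and helper below are hypothetical.

```python
# Minimal sketch (hypothetical example, not the paper's method): brute-force
# search for counterfactual explanations of a discrete Bayesian classifier.
from itertools import combinations, product

from sklearn.naive_bayes import CategoricalNB

# Toy training data over three categorical features (values encoded as ints).
X = [[0, 1, 0], [1, 1, 0], [0, 0, 1], [1, 0, 1], [0, 1, 1], [1, 1, 1]]
y = [0, 0, 1, 1, 0, 1]
clf = CategoricalNB().fit(X, y)

# Candidate values for each feature, taken from the training data.
feature_values = [sorted({row[i] for row in X}) for i in range(3)]

def counterfactuals(x, max_changes=2):
    """Yield small sets of feature changes that flip the predicted class of x."""
    original = clf.predict([x])[0]
    for k in range(1, max_changes + 1):
        for idxs in combinations(range(len(x)), k):
            for new_vals in product(*(feature_values[i] for i in idxs)):
                if any(x[i] == v for i, v in zip(idxs, new_vals)):
                    continue  # skip no-op changes; smaller k already covers them
                x_cf = list(x)
                for i, v in zip(idxs, new_vals):
                    x_cf[i] = v
                if clf.predict([x_cf])[0] != original:
                    yield dict(zip(idxs, new_vals))

# Each dict maps feature index -> new value that flips the prediction.
print(list(counterfactuals([0, 1, 0])))
```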
DAX: Deep Argumentative eXplanation for Neural Networks
TLDR
We propose a methodology for explaining NNs, providing transparency about their inner workings, by utilising computational argumentation (a form of symbolic AI offering reasoning abstractions for a variety of settings where opinions matter) as the scaffolding underpinning Deep Argumentative eXplanations (DAXs).
Argflow: A Toolkit for Deep Argumentative Explanations for Neural Networks
In recent years, machine learning (ML) models have been successfully applied in a variety of real-world applications. However, they are often complex and incomprehensible to human users. This can …
Argumentative XAI: A Survey
TLDR
In this survey we overview XAI approaches built using methods from the field of computational argumentation, leveraging its wide array of reasoning abstractions and explanation delivery methods.
Explaining PageRank through Argumentation
In this paper we show how re-interpreting PageRank as an argumentation semantics for a bipolar argumentation framework empowers its explainability. To this end, we propose several types of …
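As a rough illustration (a minimal sketch, not the paper's formalisation): PageRank computed by power iteration over a small graph whose nodes are read as arguments and whose edges as support relations of a bipolar argumentation framework, so that a node's score can be traced back to its supporters. The graph and names below are hypothetical.

```python
# Illustrative sketch (hypothetical example): PageRank by power iteration over a
# graph read as a bipolar argumentation framework (edges as support relations).
import numpy as np

# Hypothetical support graph: edge (a, b) means "argument a supports argument b".
supports = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "a")]
nodes = sorted({n for edge in supports for n in edge})
index = {node: i for i, node in enumerate(nodes)}

# Column-stochastic transition matrix: each argument spreads its score evenly
# over the arguments it supports.
n = len(nodes)
M = np.zeros((n, n))
out_degree = {node: 0 for node in nodes}
for src, _ in supports:
    out_degree[src] += 1
for src, dst in supports:
    M[index[dst], index[src]] = 1.0 / out_degree[src]

damping = 0.85
scores = np.full(n, 1.0 / n)
for _ in range(100):  # power iteration until (approximate) convergence
    scores = (1 - damping) / n + damping * (M @ scores)

# Read each score together with its incoming supporters, i.e. the ingredients
# of a support-based explanation of that score.
for node in nodes:
    supporters = [s for s, d in supports if d == node]
    print(f"{node}: score={scores[index[node]]:.3f}, supported by {supporters}")
```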
Deep Argumentative Explanations
Despite the recent, widespread focus on eXplainable AI (XAI), explanations computed by XAI methods tend to provide little insight into the functioning of Neural Networks (NNs). We propose a novel …
Influence-Driven Explanations for Bayesian Network Classifiers
TLDR
We propose the novel formalism of influence-driven explanations (IDXs) for discrete Bayesian network classifiers (BCs), targeting greater transparency of their inner workings by including intermediate variables in explanations.