Argflow: A Toolkit for Deep Argumentative Explanations for Neural Networks
Argflow is presented, a toolkit enabling the generation of a variety of ‘deep’ argumentative explanations (DAXs) for outputs of NNs on classification tasks.
Argumentative XAI: A Survey
- Kristijonas Čyras, Antonio Rago, Emanuele Albini, P. Baroni, F. Toni
- Computer Science, IJCAI
- 24 May 2021
This survey overviews the literature, focusing on the different types of explanation, the models with which argumentation-based explanations are deployed, the forms of delivery, and the argumentation frameworks used, and lays out a roadmap for future work.
Relation-Based Counterfactual Explanations for Bayesian Network Classifiers
It is shown empirically for various BCs that CFXs provide useful information in real-world settings, and that they have inherent advantages over existing explanation methods in the literature.
Counterfactual Shapley Additive Explanations
This work proposes a variant of SHAP, Counterfactual SHAP (CF-SHAP), that incorporates counterfactual information to produce a background dataset for use within the marginal (a.k.a. interventional) Shapley value framework.
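The marginal (interventional) Shapley value framework the summary refers to can be sketched by enumerating feature coalitions exactly and averaging model outputs over a background dataset. The toy model and function names below are illustrative assumptions, not the CF-SHAP implementation; CF-SHAP's contribution is the counterfactual-derived choice of background, which this sketch takes as given.

```python
# Exact marginal (interventional) Shapley values by coalition enumeration.
# Illustrative sketch; not the CF-SHAP codebase.
from itertools import combinations
from math import factorial

def marginal_shapley(model, x, background):
    """Shapley value of each feature of x, averaging over a background dataset."""
    n = len(x)

    def value(S):
        # Expected model output with features in S fixed to x's values,
        # the rest drawn from the background dataset (marginal intervention).
        total = 0.0
        for b in background:
            z = [x[i] if i in S else b[i] for i in range(n)]
            total += model(z)
        return total / len(background)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Standard Shapley coalition weight |S|! (n-|S|-1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

model = lambda z: 2 * z[0] + z[1]            # toy linear model
x = [1.0, 1.0]
background = [[0.0, 0.0], [0.0, 2.0]]        # stand-in for a CF-derived background
phi = marginal_shapley(model, x, background)  # → [2.0, 0.0] for this linear model
```

For a linear model, each feature's marginal Shapley value reduces to its coefficient times the gap between the input and the background mean, which makes the output easy to check by hand.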
PageRank as an Argumentation Semantics
Influence-Driven Explanations for Bayesian Network Classifiers
This work demonstrates IDXs' capability to explain various forms of BCs, e.g. naive or multi-label, binary or categorical, and also integrates recent approaches to explanations for BCs from the literature.
DAX: Deep Argumentative eXplanation for Neural Networks
- Emanuele Albini, Piyawat Lertvittayakumjorn, Antonio Rago, F. Toni
- Computer Science, ArXiv
- 10 December 2020
This work proposes a methodology for explaining NNs, providing transparency about their inner workings, by utilising computational argumentation as the scaffolding underpinning Deep Argumentative eXplanations (DAXs).
Interpreting and explaining pagerank through argumentation semantics
This paper proposes several types of argument-based explanations for PageRank, each of which focuses on different aspects of the algorithm and uncovers information useful for the comprehension of its results.
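The algorithm being explained in these PageRank entries can be sketched as a power iteration over a link graph; this is a minimal sketch of standard PageRank itself, under assumed parameter choices, not of the paper's argumentation-based explanations.

```python
# Minimal PageRank via power iteration. Sketch of the base algorithm only;
# the damping factor and iteration count are conventional assumptions.
def pagerank(links, d=0.85, iters=50):
    """links: dict mapping each node to the list of nodes it links to."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1.0 - d) / n for u in nodes}
        for u in nodes:
            out = links[u]
            if out:
                # Distribute u's rank equally over its out-links.
                share = rank[u] / len(out)
                for v in out:
                    new[v] += d * share
            else:
                # Dangling node: spread its rank uniformly over all nodes.
                for v in nodes:
                    new[v] += d * rank[u] / n
        rank = new
    return rank

g = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
pr = pagerank(g)  # ranks sum to 1; "c", with two in-links, outranks "b"
```

The argumentation-based readings surveyed here reinterpret exactly this fixed-point computation, treating links as support/attack relations between arguments.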
Forging Argumentative Explanations from Causal Models
The conceptualisation is based on reinterpreting properties of AF semantics as explanation moulds, i.e. means for characterising argumentative relations, and it is shown how the extracted bipolar AFs can serve as relation-based explanations for the outputs of causal models.
Explaining PageRank through Argumentation
- Emanuele Albini
- Computer Science
Re-interpreting PageRank as an argumentation semantics for a bipolar argumentation framework improves its explainability; several types of explanation are proposed, each of which focuses on different aspects of the algorithm and uncovers information useful for understanding its results.