# Explanations by arbitrated argumentative dispute

```bibtex
@article{Cyras2019ExplanationsBA,
  title={Explanations by arbitrated argumentative dispute},
  author={Kristijonas Cyras and David Birch and Yike Guo and Francesca Toni and Rajvinder Dulay and Sally Turvey and Daniel Greenberg and Tharindi Hapuarachchi},
  journal={Expert Syst. Appl.},
  year={2019},
  volume={127},
  pages={141--156}
}
```
• Published 1 August 2019
• Computer Science
• Expert Syst. Appl.
25 Citations

## Citations

Legal and Technical Feasibility of the GDPR’s Quest for Explanation of Algorithmic Decisions: of Black Boxes, White Boxes and Fata Morganas
• Law
European Journal of Risk Regulation
• 2020
By adopting an interdisciplinary approach, the authors explore whether it is possible to translate the EU legal requirements for an explanation into actual machine-learning decision-making, and whether technical limitations can shape the way the legal right is used in practice.
Argumentation and explainable artificial intelligence: a survey
• Computer Science
The Knowledge Engineering Review
• 2021
It is shown how Argumentation can enable Explainability for solving various types of problems in decision-making, justification of an opinion, and dialogues, and approaches that combine Machine Learning and Argumentation Theory toward more interpretable predictive models are presented.
Interpretability of Gradual Semantics in Abstract Argumentation
• Philosophy, Computer Science
ECSQARU
• 2019
A new property is defined, and it is shown that the score of an argument returned by a gradual semantics satisfying this property can also be computed by aggregating the impact of the other arguments on it, making it possible to rank the arguments of an argumentation framework.
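For a concrete sense of what a gradual semantics looks like, one of the simplest is the h-categorizer of Besnard and Hunter, which scores each argument as 1 / (1 + the sum of its attackers' scores). The sketch below is illustrative only and not taken from the paper above; the encoding of the framework as argument names and attack pairs is an assumption.

```python
def h_categorizer(args, attacks, iters=100):
    """Fixed-point iteration for the h-categorizer gradual semantics:
    s(a) = 1 / (1 + sum of attackers' scores). Unattacked arguments
    get score 1; heavily attacked arguments get lower scores."""
    # Map each argument to the list of arguments attacking it.
    attackers = {a: [x for (x, y) in attacks if y == a] for a in args}
    # Start every score at 1 and iterate the defining equation.
    s = {a: 1.0 for a in args}
    for _ in range(iters):
        s = {a: 1.0 / (1.0 + sum(s[b] for b in attackers[a])) for a in args}
    return s

# a attacks b, b attacks c: a is unattacked, so s(a) = 1,
# s(b) converges to 1 / (1 + 1) = 0.5, and s(c) to 1 / 1.5 = 2/3.
scores = h_categorizer({"a", "b", "c"}, {("a", "b"), ("b", "c")})
```

Because an argument's score depends continuously on its attackers' scores, the iteration converges quickly on finite frameworks.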
Data-Empowered Argumentation for Dialectically Explainable Predictions
• Computer Science
ECAI
• 2020
This paper advocates a novel transparent paradigm of Data-Empowered Argumentation (DEAr for short) for dialectically explainable predictions, and shows empirically that DEAr is competitive with another transparent model, namely decision trees (DTs), while also naturally providing a form of dialectical explanations.
Monotonicity and Noise-Tolerance in Case-Based Reasoning with Abstract Argumentation (with Appendix)
• Computer Science
KR
• 2021
This paper proves that AA-CBR is not cautiously monotonic, a property frequently considered desirable in the literature, defines a variation of AA-CBR which is cautiously monotonic, and proves that this variation is cumulative and rationally monotonic and enables a principled treatment of noise in "incoherent" casebases.
Explainable Decision Making with Lean and Argumentative Explanations
• Philosophy
ArXiv
• 2022
This work defines ABA frameworks such that “good” decisions are admissible ABA arguments, draws argumentative explanations from dispute trees sanctioning this admissibility, and instantiates the overall framework for explainable decision-making to accommodate connections between goals and decisions in terms of decision graphs incorporating defeasible and non-defeasible information.
Cautious Monotonicity in Case-Based Reasoning with Abstract Argumentation
• Computer Science
ArXiv
• 2020
It is proved that $AA{\text -}CBR_{\succeq}$ is not cautiously monotonic, a property frequently considered desirable in the literature of non-monotonic reasoning.
A top-level model of case-based argumentation for explanation: Formalisation and experiments
• Computer Science
Argument & Computation
• 2021
This paper proposes a formal top-level model of explaining the outputs of machine-learning-based decision-making applications and evaluates it experimentally with three data sets.
Argumentative XAI: A Survey
• Computer Science
IJCAI
• 2021
This survey overviews the literature focusing on different types of explanation, different models with which argumentation-based explanations are deployed, different forms of delivery, and different argumentation frameworks they use, and lays out a roadmap for future work.
Paving the way towards counterfactual generation in argumentative conversational agents
• Philosophy
Proceedings of the 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence (NL4XAI 2019)
• 2019
Counterfactual explanations present an effective way to interpret predictions of black-box machine learning algorithms.

## References

Showing 1–10 of 60 references.
Explanation for Case-Based Reasoning via Abstract Argumentation
• Computer Science
COMMA
• 2016
Properties of a recently proposed method for CBR, based on instantiated Abstract Argumentation and referred to as AA-CBR, are studied for problems where cases are represented by abstract factors and (positive or negative) outcomes, and an outcome for a new case needs to be established.
Agents that argue and explain classifications
• Computer Science
Autonomous Agents and Multi-Agent Systems
• 2007
A formal argumentation-based model is proposed that constructs arguments in favor of each possible classification of an example, evaluates them, and determines among the conflicting arguments the acceptable ones, and a “valid” classification of the example is suggested.
Abstract Argumentation for Case-Based Reasoning
• Mathematics
KR
• 2016
This work employs abstract argumentation (AA) and proposes a novel methodology for CBR, called AA-CBR, which allows to characterise the computation of an outcome as a dialogical process between a proponent and an opponent.
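The proponent–opponent dialogue in AA-CBR rests on the standard grounded semantics of abstract argumentation. As a minimal sketch (not the paper's implementation; the representation of a framework as argument names and attack pairs is an assumption), the grounded extension can be computed by repeatedly accepting every argument whose attackers are all already defeated:

```python
def grounded_extension(args, attacks):
    """Compute the grounded extension of an abstract argumentation
    framework (args, attacks): iterate until no change, accepting an
    argument once all its attackers are defeated, and marking an
    argument defeated once some accepted argument attacks it."""
    # Map each argument to the set of arguments attacking it.
    attackers = {a: set() for a in args}
    for (x, y) in attacks:
        attackers[y].add(x)
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in args:
            if a not in accepted and attackers[a] <= defeated:
                accepted.add(a)
                changed = True
        for a in args:
            if a not in defeated and attackers[a] & accepted:
                defeated.add(a)
                changed = True
    return accepted

# a attacks b, b attacks c: a is unattacked and defends c,
# so the grounded extension is {a, c}.
ext = grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")})
```

The grounded extension is the unique minimal complete extension, which is what makes it a natural target for the sceptical, dialogue-based outcome computation described above.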
Providing Arguments in Discussions Based on the Prediction of Human Argumentative Behavior
• Philosophy
AAAI
• 2015
The Predictive and Relevance based Heuristic agent (PRH) is presented, which uses a model of human argumentative behavior together with a heuristic that estimates the relevance of candidate arguments to the last argument given in order to propose arguments.
Argumentation for Explainable Scheduling
• Computer Science
AAAI
• 2019
A novel paradigm using argumentation to empower the interaction between optimization solvers and users is defined, supported by tractable explanations which certify or refute solutions.
Formal Arguments, Preferences, and Natural Language Interfaces to Humans: an Empirical Evaluation
• Philosophy
ECAI
• 2014
It is argued that, in order to create argumentation systems, designers must take implicit domain-specific knowledge into account, and a correspondence is shown between the acceptability of arguments by human subjects and the justification status prescribed by the formal theory in the majority of cases.