Explanations by arbitrated argumentative dispute

@article{Cyras2019ExplanationsBA,
  title={Explanations by arbitrated argumentative dispute},
  author={Kristijonas Cyras and David Birch and Yike Guo and Francesca Toni and Rajvinder Dulay and Sally Turvey and Daniel Greenberg and Tharindi Hapuarachchi},
  journal={Expert Systems with Applications},
  year={2019},
  volume={127},
  pages={141--156}
}


Legal and Technical Feasibility of the GDPR’s Quest for Explanation of Algorithmic Decisions: of Black Boxes, White Boxes and Fata Morganas
TLDR
By adopting an interdisciplinary approach, the authors explore not only whether it is possible to translate the EU legal requirements for an explanation into actual machine-learning decision-making, but also whether the technical limitations of doing so can shape the way the legal right is used in practice.
Argumentation and explainable artificial intelligence: a survey
TLDR
It is shown how Argumentation can enable Explainability for solving various types of problems in decision-making, justification of an opinion, and dialogues, and approaches that combine Machine Learning and Argumentation Theory, toward more interpretable predictive models are presented.
Interpretability of Gradual Semantics in Abstract Argumentation
TLDR
A new property is defined, and it is shown that the score of an argument returned by a gradual semantics satisfying this property can also be computed by aggregating the impact of the other arguments on it, making it possible to provide a ranking between the arguments of an argumentation framework.
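The scoring idea in this entry can be illustrated with a minimal sketch of one well-known gradual semantics, the h-categoriser, in which an argument's score is obtained by aggregating the impact of its attackers. The framework and argument names below are illustrative; this is not necessarily the exact semantics or property studied in the paper.

```python
def h_categoriser(arguments, attacks, iters=100):
    """Iteratively compute h-categoriser scores:
        score(a) = 1 / (1 + sum of scores of a's attackers).
    An unattacked argument gets score 1; the more (and the stronger)
    the attackers, the lower the score."""
    attackers = {a: [b for (b, t) in attacks if t == a] for a in arguments}
    score = {a: 1.0 for a in arguments}
    for _ in range(iters):  # synchronous updates until (near) convergence
        score = {a: 1.0 / (1.0 + sum(score[b] for b in attackers[a]))
                 for a in arguments}
    return score

# Chain a -> b -> c: a is unattacked, b is attacked by a,
# and c is attacked by b (so c is indirectly defended by a).
scores = h_categoriser({"a", "b", "c"}, {("a", "b"), ("b", "c")})
```

Sorting arguments by their scores yields the kind of ranking the entry refers to: here `a` (score 1) ranks above `c` (2/3), which ranks above `b` (1/2).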
Data-Empowered Argumentation for Dialectically Explainable Predictions
TLDR
This paper advocates a novel transparent paradigm of Data-Empowered Argumentation (DEAr in short) for dialectically explainable predictions, and shows empirically that DEAr is competitive with another transparent model, namely decision trees (DTs), while also naturally providing a form of dialectical explanations.
Monotonicity and Noise-Tolerance in Case-Based Reasoning with Abstract Argumentation (with Appendix)
TLDR
This paper proves that AA-CBR is not cautiously monotonic, a property frequently considered desirable in the literature, defines a variation of AA-CBR which is cautiously monotonic, and proves that this variation is cumulative and rationally monotonic, and that it empowers a principled treatment of noise in "incoherent" casebases.
Explainable Decision Making with Lean and Argumentative Explanations
TLDR
This work defines ABA frameworks such that “good” decisions are admissible ABA arguments, draws argumentative explanations from dispute trees sanctioning this admissibility, and instantiates the overall framework for explainable decision-making to accommodate connections between goals and decisions in terms of decision graphs incorporating defeasible and non-defeasible information.
Cautious Monotonicity in Case-Based Reasoning with Abstract Argumentation
TLDR
It is proved that $AA{\text -}CBR_{\succeq}$ is not cautiously monotonic, a property frequently considered desirable in the literature of non-monotonic reasoning.
A top-level model of case-based argumentation for explanation: Formalisation and experiments
This paper proposes a formal top-level model of explaining the outputs of machine-learning-based decision-making applications and evaluates it experimentally with three data sets. The model draws on
Argumentative XAI: A Survey
TLDR
This survey overviews the literature focusing on different types of explanation, different models with which argumentation-based explanations are deployed, different forms of delivery, and different argumentation frameworks they use, and lays out a roadmap for future work.
Paving the way towards counterfactual generation in argumentative conversational agents
Counterfactual explanations present an effective way to interpret predictions of black-box machine learning algorithms. Whereas there is a significant body of research on counterfactual reasoning in

References

Showing 1-10 of 60 references
Explanation for Case-Based Reasoning via Abstract Argumentation
TLDR
Properties of a recently proposed method for CBR, based on instantiated Abstract Argumentation and referred to as AA-CBR, are studied for problems where cases are represented by abstract factors and (positive or negative) outcomes, and an outcome for a new case needs to be established.
Agents that argue and explain classifications
TLDR
A formal argumentation-based model is proposed that constructs arguments in favor of each possible classification of an example, evaluates them, determines the acceptable ones among the conflicting arguments, and suggests a “valid” classification of the example.
Abstract Argumentation for Case-Based Reasoning
TLDR
This work employs abstract argumentation (AA) and proposes a novel methodology for CBR, called AA-CBR, which allows to characterise the computation of an outcome as a dialogical process between a proponent and an opponent.
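AA-CBR's dialogical process is typically characterised via the grounded semantics of the underlying argumentation framework. The sketch below computes the grounded extension for a generic abstract argumentation framework; the argument names and the tiny example are illustrative, and this is not an implementation of AA-CBR itself.

```python
def grounded_extension(arguments, attacks):
    """Least fixed point of the characteristic function:
        F(S) = {a : every attacker of a is counter-attacked by S}.
    Iterating from the empty set converges to the grounded
    (most sceptical) extension of the framework."""
    attackers = {a: {b for (b, t) in attacks if t == a} for a in arguments}
    extension = set()
    while True:
        defended = {a for a in arguments
                    if all(any((s, b) in attacks for s in extension)
                           for b in attackers[a])}
        if defended == extension:
            return extension
        extension = defended

# b attacks both a and c; d attacks b, so d defends a and c.
args = {"a", "b", "c", "d"}
atts = {("b", "a"), ("b", "c"), ("d", "b")}
winners = grounded_extension(args, atts)
```

On this example the grounded extension is `{"a", "c", "d"}`: the unattacked `d` is accepted first, and its attack on `b` reinstates `a` and `c`, mirroring a dispute in which the proponent answers every objection raised by the opponent.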
Providing Arguments in Discussions Based on the Prediction of Human Argumentative Behavior
TLDR
The Predictive and Relevance based Heuristic agent (PRH) is presented, which combines a model of human argumentative behavior with a heuristic that estimates the relevance of possible arguments to the last argument given, in order to propose arguments.
Argumentation for Explainable Scheduling
TLDR
A novel paradigm using argumentation to empower the interaction between optimization solvers and users is defined, supported by tractable explanations which certify or refute solutions.
Formal Arguments, Preferences, and Natural Language Interfaces to Humans: an Empirical Evaluation
TLDR
It is argued that, in order to create argumentation systems, designers must take implicit domain-specific knowledge into account; a correspondence is shown between the acceptability of arguments by human subjects and the justification status prescribed by the formal theory in the majority of cases.