Corpus ID: 219890543

Explanatory predictions with artificial neural networks and argumentation

by O. Cocarascu, K. Cyras and F. Toni

Related papers

Representation, Justification and Explanation in a Value Driven Agent: An Argumentation-Based Approach
This paper takes a Value Driven Agent as an example, explicitly representing the implicit knowledge of a machine-learning-based autonomous agent and using this representation to justify and explain the agent's decisions in terms of a typical argumentation formalism, Assumption-Based Argumentation (ABA).
A top-level model of case-based argumentation for explanation: Formalisation and experiments
This paper proposes a formal top-level model for explaining the outputs of machine-learning-based decision-making applications, drawing on AI & law research on argumentation with cases.
Argumentation and explainable artificial intelligence: a survey
Argumentation and eXplainable Artificial Intelligence (XAI) are closely related: in recent years, argumentation has been used to provide explainability to AI.
Bayesian Pruned Random Rule Foams for XAI
  A. K. Panda and B. Kosko, 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2021
A random rule foam grows and combines several independent fuzzy rule-based systems by randomly sampling input-output data from a trained deep neural classifier.
Monotonicity and Noise-Tolerance in Case-Based Reasoning with Abstract Argumentation (with Appendix)
It is proved that AA-CBR is not cautiously monotonic, a property frequently considered desirable in the literature; a variation of AA-CBR is defined which is cautiously monotonic and enables a principled treatment of noise in "incoherent" casebases.
Altruist: Argumentative Explanations through Local Interpretations of Predictive Models
This study introduces a meta-explanation methodology that provides truthful interpretations, in terms of feature importance, to the end user through argumentation, and can serve as an evaluation or selection tool for multiple feature-importance-based interpretation techniques.
Bayesian Rule Posteriors from a Rule Foam
A rule foam converts a neural black-box classifier into a probabilistic rule-based ontology in which a fresh Bayesian posterior describes the relative rule firings for each input pattern.
Cautious Monotonicity in Case-Based Reasoning with Abstract Argumentation
It is proved that $AA{\text -}CBR_{\succeq}$ is not cautiously monotonic, a property frequently considered desirable in the literature on non-monotonic reasoning.
Data-Empowered Argumentation for Dialectically Explainable Predictions
This paper advocates a novel transparent paradigm of Data-Empowered Argumentation (DEAr for short) for dialectically explainable predictions, and shows empirically that DEAr is competitive with another transparent model, decision trees (DTs), while also naturally providing a form of dialectical explanation.
Explanation from Specification
This work formulates an approach in which the type of explanation produced is guided by a specification; two examples are discussed: explanations for Bayesian networks using the theory of argumentation, and explanations for graph neural networks.