A probabilistic argumentation framework for reinforcement learning agents

Régis Riveret, Yang Gao, Guido Governatori, Antonino Rotolo, Jeremy V. Pitt, Giovanni Sartor. Autonomous Agents and Multi-Agent Systems.
A bounded-reasoning agent may face two dimensions of uncertainty: firstly, the uncertainty arising from partial information and conflicting reasons, and secondly, the uncertainty arising from the stochastic nature of its actions and the environment. This paper attempts to address both dimensions within a single unified framework, by bringing together probabilistic argumentation and reinforcement learning. We show how a probabilistic rule-based argumentation framework can capture Markov decision… 

A probabilistic deontic argumentation framework

A Deontic Argumentation Framework Towards Doctrine Reification

It is shown that bivalent statement labellings can fall short of addressing normative completeness, and for this reason a trivalent labelling semantics is proposed.
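As background for the trivalent idea, a minimal sketch (not taken from the paper) of a trivalent labelling is the grounded labelling of an abstract argumentation framework, which assigns each argument one of IN, OUT, or UNDEC rather than a bivalent accepted/rejected status. The graph below is illustrative.

```python
def grounded_labelling(arguments, attacks):
    """Compute the grounded labelling of an abstract argumentation framework.

    arguments: iterable of argument names; attacks: set of (attacker, target).
    Returns a dict mapping each argument to "IN", "OUT", or "UNDEC".
    """
    label = {a: "UNDEC" for a in arguments}
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if label[a] != "UNDEC":
                continue
            # IN: every attacker is already labelled OUT (vacuously true if unattacked)
            if all(label[b] == "OUT" for b in attackers[a]):
                label[a] = "IN"
                changed = True
            # OUT: some attacker is already labelled IN
            elif any(label[b] == "IN" for b in attackers[a]):
                label[a] = "OUT"
                changed = True
    return label

# a -> b -> c: a is unattacked (IN), so b is OUT and c is IN;
# d attacks itself and remains UNDEC, which a bivalent labelling cannot express.
print(grounded_labelling("abcd", {("a", "b"), ("b", "c"), ("d", "d")}))
```

The self-attacking argument `d` is exactly the kind of case where a third label carries information that a bivalent labelling loses.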

Towards Understanding and Arguing with Classifiers: Recent Progress

A novel deep but tractable model for conditional probability distributions that can harness the expressive power of universal function approximators such as neural networks while still maintaining a wide range of tractable inference routines is reviewed.

Symbolic Explanation of Affinity-Based Reinforcement Learning Agents with Markov Models

This work develops a policy regularization method that asserts global intrinsic qualities of learned strategies, providing a means of reasoning about a policy's behavior and thus making it inherently interpretable.

On probabilistic argumentation and subargument-completeness

Probabilistic argumentation combines probability theory and formal models of argumentation. It starts from an argumentation graph whose vertices are arguments and whose edges are attacks or supports between arguments.
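One common reading of such graphs is the "constellations" approach: each argument appears independently with some probability, and the probability that a query argument is accepted is summed over all induced subgraphs in which it is accepted. The sketch below (illustrative, not from the paper; attack-only graph, grounded semantics, brute-force enumeration) makes the idea concrete.

```python
from itertools import combinations

def grounded_in(query, arguments, attacks):
    """True iff query is labelled IN under the grounded semantics."""
    label = {a: None for a in arguments}
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if label[a] is not None:
                continue
            if all(label[b] == "OUT" for b in attackers[a]):
                label[a], changed = "IN", True
            elif any(label[b] == "IN" for b in attackers[a]):
                label[a], changed = "OUT", True
    return label[query] == "IN"

def acceptance_probability(query, probs, attacks):
    """probs: {argument: P(argument appears)}; attacks: set of (attacker, target).

    Enumerates every induced subgraph (feasible only for small graphs) and
    sums the probability of those in which the query argument is accepted.
    """
    args = list(probs)
    total = 0.0
    for r in range(len(args) + 1):
        for subset in combinations(args, r):
            present = set(subset)
            if query not in present:
                continue
            p = 1.0
            for a in args:
                p *= probs[a] if a in present else 1 - probs[a]
            sub_attacks = {(x, y) for (x, y) in attacks
                           if x in present and y in present}
            if grounded_in(query, present, sub_attacks):
                total += p
    return total

# b attacks a: a is accepted only when a is present and b is absent,
# i.e. with probability 0.9 * (1 - 0.5)
print(acceptance_probability("a", {"a": 0.9, "b": 0.5}, {("b", "a")}))
```

The exponential enumeration is the usual price of the constellations semantics; approximate inference (e.g. the Boltzmann-machine coupling cited below) is one response to it.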

Machine Learning for Utility Prediction in Argument-Based Computational Persuasion

Two ML methods, EAI and EDS, are developed that leverage information from users to predict their utilities; they are evaluated in a simulation setting and in a realistic case study concerning healthy eating habits.

Nova: Value-based Negotiation of Norms

This work proposes an agent-based negotiation framework, where agents’ requirements are represented as values, and an agent revises the nMAS specification to promote its values by executing a set of norm revision rules that incorporate ontology-based reasoning.

MARLeME: A Multi-Agent Reinforcement Learning Model Extraction Library

This work introduces MARLeME: a MARL model extraction library, designed to improve explainability of MARL systems by approximating them with symbolic models.

Reinforcement Learning Your Way: Agent Characterization through Policy Regularization

The method guides the agents' behaviour during learning, which results in an intrinsic characterization; it connects the learning process with model explanation, and is intended to be employed in developing agents that optimize individual financial customers' investment portfolios based on their spending personalities.

Preemptive Anomaly Prediction in IoT Components (short paper)

An approach combining reliability quantification and reinforcement learning is proposed to build a mechanism that achieves predictive maintenance for the components, such as devices and links, of an IoT system named DeltaIoT.



Probabilistic rule-based argumentation for norm-governed learning agents

This paper proposes an approach to investigating norm-governed learning agents that combines a logic-based formalism with an equation-based counterpart: argumentation captures the reasoning of such agents and their interactions, while equations capture systemic features.

A labelling framework for probabilistic argumentation

This work investigates a labelling-oriented framework encompassing a basic setting for rule-based argumentation and its (semi-)abstract account; it provides a systematic treatment of various kinds of uncertainty and of their relationships, allowing assertions from the literature to be backed or questioned.

Probabilistic abstract argumentation: an investigation with Boltzmann machines

To construct neuro-argumentative systems based on probabilistic argumentation, a model of abstract argumentation is associated with the graphical model of Boltzmann machines, coupling the computational advantages of learning and massively parallel computation.

Probabilistic Reasoning with Abstract Argumentation Frameworks

A general framework to measure the amount of conflict of inconsistent assessments and provide a method for inconsistency-tolerant reasoning is presented.

Argumentation Accelerated Reinforcement Learning for Cooperative Multi-Agent Systems

This work defines AARL via argumentation and proves that it can coordinate independent cooperative agents that have a shared goal but need to perform different actions, and shows that it significantly improves upon standard RL.

Neuro-Symbolic Agents: Boltzmann Machines and Probabilistic Abstract Argumentation with Sub-Arguments

An abstract argumentation framework accounting for sub-arguments is considered, where the content of (sub-)arguments is left unspecified to make the ideas as widely applicable as possible.

An abstract framework for argumentation with structured arguments

H. Prakken, Argument Comput., 2010
An abstract framework for structured arguments is presented, which instantiates Dung's ('On the Acceptability of Arguments and its Fundamental Role in Nonmonotonic Reasoning, Logic Programming and n-Person Games') abstract argumentation frameworks.

A quantitative approach to belief revision in structured probabilistic argumentation

This paper proposes the QAFO (Quantitative Annotation Function-based Operators) class of operators, a subclass of AFO, and goes on to study the complexity of several problems related to their specification and application in revising knowledge bases.

Argumentation in Artificial Intelligence

This book presents an overview of key concepts in argumentation theory and of formal models of argumentation in AI, beginning with a review of foundational issues in argumentation and formal argument modeling, and moving to more specialized topics, such as algorithmic issues, argumentation in multi-agent systems, and strategic aspects of argumentation.