Corpus ID: 224803282

Generating Strategic Dialogue for Negotiation with Theory of Mind

Runzhe Yang, Jingxiao Chen, Karthik Narasimhan
We propose a framework to integrate the concept of Theory of Mind (ToM) into generating utterances for task-oriented dialogue. Our approach explores the ability to model and infer personality types of opponents, predicts their responses, and uses this information to adapt the agent's high-level strategy in negotiation tasks. We introduce a probabilistic formulation for the first-order theory of mind and test our approach on the CraigslistBargain dataset. Experiments show that our method using… 
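The first-order theory-of-mind formulation can be pictured as maintaining a posterior over discrete opponent personality types and updating it after each observed response. The sketch below is a hypothetical illustration with hand-picked type labels and likelihoods, not the paper's learned model (which would estimate these from CraigslistBargain dialogues):

```python
# Hypothetical sketch of a first-order theory-of-mind belief update:
# the agent keeps a posterior over discrete opponent personality types
# and refines it after each observed opponent response.

PERSONALITY_TYPES = ["cooperative", "competitive", "neutral"]  # assumed labels

# Assumed likelihoods P(response | personality); a real system would
# learn these from dialogue data rather than hard-code them.
LIKELIHOOD = {
    "accept":  {"cooperative": 0.6, "competitive": 0.1, "neutral": 0.3},
    "counter": {"cooperative": 0.3, "competitive": 0.6, "neutral": 0.4},
    "reject":  {"cooperative": 0.1, "competitive": 0.3, "neutral": 0.3},
}

def update_belief(belief, observed_response):
    """Bayes rule: P(type | response) ∝ P(response | type) * P(type)."""
    posterior = {z: LIKELIHOOD[observed_response][z] * p
                 for z, p in belief.items()}
    total = sum(posterior.values())
    return {z: p / total for z, p in posterior.items()}

# Start from a uniform prior and observe two counter-offers.
belief = {z: 1 / len(PERSONALITY_TYPES) for z in PERSONALITY_TYPES}
for response in ["counter", "counter"]:
    belief = update_belief(belief, response)

# After repeated counter-offers, "competitive" dominates the posterior.
print(max(belief, key=belief.get))  # → competitive
```

The inferred type would then condition the agent's high-level negotiation strategy, e.g. choosing how aggressively to counter-offer.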




Evaluating Persuasion Strategies and Deep Reinforcement Learning methods for Negotiation Dialogue agents

It is suggested that both a negotiation strategy that uses persuasion and a strategy trained from data with Deep Reinforcement Learning lead to an improved win rate against humans, compared to previous rule-based and supervised-learning baseline dialogue negotiators.

Opponent Modelling in Persuasion Dialogues

This work relies on an agent's experience to define a mechanism, based on Monte-Carlo simulation, for augmenting an opponent model with information likely to be dialectically related to information the model already contains.

Opponent Models with Uncertainty for Strategic Argumentation

This paper deals with the issue of strategic argumentation in the setting of Dung-style abstract argumentation theory by using opponent models (recursive representations of an agent's knowledge and beliefs regarding the opponent's knowledge) and presents three approaches to reasoning.

A Dynamic Strategy Coach for Effective Negotiation

The goal is to assist humans to become better negotiators through a machine-in-the-loop approach that combines the machine's advantage at data-driven decision-making and the human's language generation ability.

Decoupling Strategy and Generation in Negotiation Dialogues

A modular approach based on coarse dialogue acts (e.g., propose(price=50)) that decouples strategy and generation is proposed; it can flexibly set the strategy using supervised learning, reinforcement learning, or domain-specific knowledge without degeneracy.
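The decoupling idea can be made concrete with a toy sketch: a strategy module emits a coarse dialogue act, and a separate generation module renders it as text. Everything below (the act schema, the 95%-counter heuristic, the templates) is an illustrative assumption, not the paper's implementation:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of decoupling strategy from generation via coarse
# dialogue acts such as propose(price=50).

@dataclass
class DialogueAct:
    intent: str                    # e.g. "propose", "accept", "reject"
    price: Optional[float] = None

def strategy(listing_price: float, last_offer: Optional[float]) -> DialogueAct:
    """Toy rule-based strategy; an RL or supervised policy could be
    swapped in here without touching the generator."""
    if last_offer is not None and last_offer >= 0.9 * listing_price:
        return DialogueAct("accept")
    # Counter at 95% of the listing price (illustrative heuristic).
    return DialogueAct("propose", price=round(0.95 * listing_price, 2))

def generate(act: DialogueAct) -> str:
    """Template-based generation conditioned only on the coarse act."""
    if act.intent == "propose":
        return f"How about ${act.price}?"
    if act.intent == "accept":
        return "Deal, that works for me."
    return "Sorry, I can't do that."

act = strategy(listing_price=100.0, last_offer=80.0)
print(generate(act))  # How about $95.0?
```

Because the two modules communicate only through the act, either side can be retrained or replaced independently, which is the point of the decoupling.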

The Minds of Many: Opponent Modeling in a Stochastic Game

This paper introduces a stereotyping mechanism that segments the agent population into sub-groups of agents with similar behaviour, allowing larger groups of agents to be modelled robustly, and shows that Theory of Mind modelling is useful in many artificial intelligence applications.

Arguing Using Opponent Models

A heuristic that implements one such strategy is proposed; built around opponent modelling, it operates by selecting the line of argument that yields maximal utility, based on the opponent's expected response as computed by the opponent model.

Bayes-Adaptive Monte-Carlo Planning and Learning for Goal-Oriented Dialogues

An efficient Bayes-adaptive planning algorithm for goal-oriented dialogues is introduced, which combines RNN-based dialogue generation and MCTS-based Bayesian planning in a novel way, leading to robust decision-making under uncertainty about the other agent's goal.

Reasoning about Pragmatics with Neural Listeners and Speakers

A model for pragmatically describing scenes is presented, in which contrastive behavior results from a combination of inference-driven pragmatics and learned semantics; it succeeds 81% of the time in human evaluations on a referring expression game.
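The style of inference behind such pragmatic listeners and speakers can be sketched in a few lines: the speaker chooses the utterance that best lets a literal listener recover the target scene. The tiny lexicon below is a toy assumption, not the paper's learned semantics:

```python
# Minimal sketch of rational-speech-acts style pragmatic reasoning:
# the speaker picks the utterance that maximises a literal listener's
# probability of recovering the target scene.

SCENES = ["red_circle", "red_square", "blue_square"]
# Toy literal semantics: which utterances are true of which scenes.
LEXICON = {
    "red":    {"red_circle", "red_square"},
    "square": {"red_square", "blue_square"},
    "circle": {"red_circle"},
}

def literal_listener(utterance):
    """Uniform distribution over scenes consistent with the utterance."""
    consistent = LEXICON[utterance]
    return {s: (1 / len(consistent) if s in consistent else 0.0)
            for s in SCENES}

def pragmatic_speaker(target):
    """Choose the utterance maximising the literal listener's chance of
    recovering the target; contrastive behaviour falls out of this."""
    return max(LEXICON, key=lambda u: literal_listener(u).get(target, 0.0))

# "circle" uniquely identifies the target, so it beats the ambiguous "red".
print(pragmatic_speaker("red_circle"))  # → circle
```

The contrastive effect appears because ambiguous utterances spread the listener's probability mass over several scenes, so the speaker prefers the utterance that singles the target out.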

End-to-End Reinforcement Learning of Dialogue Agents for Information Access

This paper proposes KB-InfoBot - a multi-turn dialogue agent which helps users search Knowledge Bases without composing complicated queries by replacing symbolic queries with an induced “soft” posterior distribution over the KB that indicates which entities the user is interested in.
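The "soft" posterior idea can be illustrated with a small sketch: instead of issuing a hard symbolic query, the agent scores every KB entity by how well it matches the agent's belief over slot values. The toy KB and belief numbers below are illustrative assumptions, not KB-InfoBot's actual (end-to-end learned, differentiable) lookup:

```python
# Hypothetical sketch of a soft KB lookup: a posterior over entities
# proportional to how well each entity matches per-slot beliefs.

KB = [
    {"name": "Inferno", "genre": "thriller", "year": 2016},
    {"name": "Arrival", "genre": "sci-fi",   "year": 2016},
    {"name": "Gravity", "genre": "sci-fi",   "year": 2013},
]

# Beliefs over slot values inferred from the dialogue so far
# (illustrative numbers, not learned).
slot_beliefs = {
    "genre": {"sci-fi": 0.8, "thriller": 0.2},
    "year":  {2016: 0.7, 2013: 0.3},
}

def soft_posterior(kb, beliefs):
    """P(entity) ∝ product over slots of P(entity's value for that slot)."""
    scores = []
    for entity in kb:
        score = 1.0
        for slot, dist in beliefs.items():
            score *= dist.get(entity[slot], 0.0)
        scores.append(score)
    total = sum(scores) or 1.0
    return {e["name"]: s / total for e, s in zip(kb, scores)}

posterior = soft_posterior(KB, slot_beliefs)
print(max(posterior, key=posterior.get))  # → Arrival
```

Because the lookup is a distribution rather than a hard filter, it stays informative even when the user's constraints are uncertain or partially specified, and (in the actual system) it is differentiable, so the whole agent can be trained end-to-end.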