• Corpus ID: 237605095

Reinforced Natural Language Interfaces via Entropy Decomposition

  • Xiaoran Wu
  • Published 23 September 2021
  • Computer Science
  • ArXiv
In this paper, we study the technical problem of developing conversational agents that can quickly adapt to unseen tasks, learn task-specific communication tactics, and help listeners finish complex, temporally extended tasks. We find that the uncertainty of language learning can be decomposed into an entropy term and a mutual information term, corresponding to the structural and functional aspects of language, respectively. Combined with reinforcement learning, our method automatically requests… 
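The decomposition described in the abstract matches the standard information-theoretic identity H(M) = H(M | S) + I(M; S): the total uncertainty of a message distribution splits into a conditional-entropy term (structure not explained by the state) and a mutual-information term (what the message conveys about the state). The sketch below verifies this identity numerically on a toy joint distribution; the variable names and the example distribution are illustrative assumptions, not taken from the paper.

```python
import math
from collections import Counter

def entropy(dist):
    """Shannon entropy in bits of a mapping value -> probability."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Toy joint distribution over (message m, task state s); purely illustrative.
joint = {
    ("red", 0): 0.25, ("red", 1): 0.05,
    ("blue", 0): 0.05, ("blue", 1): 0.25,
    ("go", 0): 0.20, ("go", 1): 0.20,
}

# Marginal distributions over messages and states.
p_m, p_s = Counter(), Counter()
for (m, s), p in joint.items():
    p_m[m] += p
    p_s[s] += p

H_m = entropy(p_m)                    # H(M): total uncertainty of messages
H_m_given_s = entropy(joint) - entropy(p_s)  # H(M|S) = H(M,S) - H(S)
I_ms = H_m - H_m_given_s              # I(M;S): functional information

# The identity H(M) = H(M|S) + I(M;S) holds exactly.
assert abs(H_m - (H_m_given_s + I_ms)) < 1e-12
print(round(H_m, 4), round(H_m_given_s, 4), round(I_ms, 4))
```

Under this reading, the "structural" aspect corresponds to H(M | S) (regularity in the language itself) and the "functional" aspect to I(M; S) (how informative messages are about the task).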


Natural Language Does Not Emerge ‘Naturally’ in Multi-Agent Dialog
This paper presents a sequence of ‘negative’ results culminating in a ‘positive’ one – showing that while most agent-invented languages are effective, they are decidedly not interpretable or compositional.
Learning Multiagent Communication with Backpropagation
A simple neural model is explored, called CommNet, that uses continuous communication for fully cooperative tasks and the ability of the agents to learn to communicate amongst themselves is demonstrated, yielding improved performance over non-communicative agents and baselines.
Bootstrapping a Neural Conversational Agent with Dialogue Self-Play, Crowdsourcing and On-Line Reinforcement Learning
This paper discusses the advantages of this approach for industry applications of conversational agents, wherein an agent can be rapidly bootstrapped to deploy in front of users and further optimized via interactive learning from actual users of the system.
A User Simulator for Task-Completion Dialogues
A new, publicly available simulation framework, where the simulator, designed for the movie-booking domain, leverages both rules and collected data, and several agents are demonstrated and the procedure to add and test your own agent is detailed.
Neural Approaches to Conversational AI: Question Answering, Task-oriented Dialogues and Social Chatbots
This monograph is the first survey of neural approaches to conversational AI that targets Natural Language Processing and Information Retrieval audiences and provides a unified view, as well as a detailed presentation of the important ideas and insights needed to understand and create modern dialogue agents.
End-to-End Reinforcement Learning of Dialogue Agents for Information Access
This paper proposes KB-InfoBot, a multi-turn dialogue agent that helps users search knowledge bases (KBs) without composing complicated queries.
Emergent Compositionality in Signaling Games
Experimental evidence is provided suggesting that incremental pragmatic reasoning may lead to compositional referring behavior in both computational agents and in humans.
Deep Reinforcement Learning for Dialogue Generation
This work simulates dialogues between two virtual agents, using policy gradient methods to reward sequences that display three useful conversational properties: informativity (non-repetitive turns), coherence, and ease of answering.
Semantically Conditioned LSTM-based Natural Language Generation for Spoken Dialogue Systems
A statistical language generator based on a semantically controlled Long Short-term Memory (LSTM) structure that can learn from unaligned data by jointly optimising sentence planning and surface realisation using a simple cross entropy training criterion, and language variation can be easily achieved by sampling from output candidates.
Transferable Dialogue Systems and User Simulators
The goal is to develop a modelling framework that incorporates new dialogue scenarios through self-play between the two agents, an approach shown to be highly effective in bootstrapping the performance of both agents under transfer learning.