You Impress Me: Dialogue Generation via Mutual Persona Perception

@inproceedings{Liu2020YouIM,
  title={You Impress Me: Dialogue Generation via Mutual Persona Perception},
  author={Qian Liu and Yihong Chen and B. Chen and Jian-Guang Lou and Zixuan Chen and Bin Zhou and Dongmei Zhang},
  booktitle={Annual Meeting of the Association for Computational Linguistics},
  year={2020}
}
Despite the continuing efforts to improve the engagingness and consistency of chit-chat dialogue systems, the majority of current work simply focuses on mimicking human-like responses, leaving the modeling of understanding between interlocutors understudied. Research in cognitive science, however, suggests that understanding is an essential signal for a high-quality chit-chat conversation. Motivated by this, we propose P^2 Bot, a transmitter-receiver based framework with the aim of…
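The abstract only names the transmitter-receiver setup, so here is a minimal, hypothetical sketch of the underlying idea: a receiver scores how well a sampled response reflects the speaker's persona, and that score serves as a reward for policy-gradient fine-tuning of the transmitter. The cosine-similarity scorer and the REINFORCE-style loss below are illustrative stand-ins, not the paper's actual models.

```python
# Hypothetical sketch: persona-perception reward + REINFORCE-style update.
import torch
import torch.nn.functional as F

def persona_perception_reward(persona_emb: torch.Tensor,
                              response_emb: torch.Tensor) -> torch.Tensor:
    """Receiver stand-in: cosine similarity between persona and response embeddings."""
    return F.cosine_similarity(persona_emb, response_emb, dim=-1)  # (batch,)

def reinforce_loss(token_log_probs: torch.Tensor,
                   reward: torch.Tensor) -> torch.Tensor:
    """Policy-gradient loss for the transmitter.

    token_log_probs: (batch, seq_len) log-probabilities of the sampled response tokens.
    reward:          (batch,) scalar reward per sampled response.
    """
    seq_log_prob = token_log_probs.sum(dim=-1)   # log p(response | context, persona)
    baseline = reward.mean()                     # simple variance-reduction baseline
    return -((reward - baseline).detach() * seq_log_prob).mean()
```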

Citations

Partner Personas Generation for Dialogue Response Generation

Incorporating persona information allows diverse and engaging responses in dialogue response generation. Unfortunately, prior works have primarily focused on self personas and have overlooked partner personas.

A Personalized Dialogue Generator with Implicit User Persona Detection

This work proposes a novel personalized dialogue generator that detects an implicit user persona, attempting to model the user's potential persona and its representation from the dialogue history without external knowledge.

Learning to Improve Persona Consistency in Multi-party Dialogue Generation via Text Knowledge Enhancement

A multi-party personalized dialogue dataset is constructed, and a graph convolution network model (PersonaTKG) with an addressee-selecting mechanism is proposed that integrates personas, dialogue utterances, and external text knowledge in a unified graph.

Towards Building a Personalized Dialogue Generator via Implicit User Persona Detection

This work proposes a novel personalized dialogue generator that detects an implicit user persona, pays closer attention to the user's persona, and achieves better results in evaluations.

Structural Characterization for Dialogue Disentanglement

This work specifically takes structural factors into account and designs a novel model for dialogue disentanglement that achieves new state-of-the-art results on the Ubuntu IRC benchmark dataset and contributes to dialogue-related comprehension.

Is Your Chatbot Perplexing?: Confident Personalized Conversational Agent for Consistent Chit-Chat Dialogue

A novel framework is built that processes complex data consisting of personalities and utterances and fine-tunes a large-scale self-attention-based language model, and a consistent personalized conversational agent (CPC-Agent) is proposed within this framework to achieve accuracy and consistency.

Improving Persona Understanding for Persona-based Dialogue Generation with Diverse Knowledge Selection

This paper designs a dynamic persona fusion mechanism to effectively mine the relevance between the dialogue context and persona information and to dynamically predict whether to incorporate persona features during the dialogue.

Generate, Delete and Rewrite: A Three-Stage Framework for Improving Persona Consistency of Dialogue Generation

This work introduces a three-stage framework that employs a generate-delete-rewrite mechanism to delete inconsistent words from a generated response prototype and further rewrite it to a personality-consistent one.
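As a rough illustration of such a three-stage pipeline (the component functions below are placeholders, not the paper's models), the flow can be written as:

```python
# Hypothetical generate-delete-rewrite pipeline for persona consistency.
from typing import Callable, List

def generate_delete_rewrite(context: str,
                            persona: List[str],
                            generate: Callable[[str], str],
                            contradicts: Callable[[str, List[str]], bool],
                            rewrite: Callable[[List[str], List[str]], str]) -> str:
    prototype = generate(context)                  # stage 1: draft a response prototype
    masked = ["[MASK]" if contradicts(tok, persona) else tok
              for tok in prototype.split()]        # stage 2: delete inconsistent words
    return rewrite(masked, persona)                # stage 3: rewrite into a consistent response
```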

Will I Sound like Me? Improving Persona Consistency in Dialogues through Pragmatic Self-Consciousness

Inspired by social cognition and pragmatics, existing dialogue agents are endowed with public self-consciousness on the fly through an imaginary listener, which keeps them from uttering contradictions and improves the consistency of existing dialogue models.

Group-wise Contrastive Learning for Neural Dialogue Generation

This work introduces contrastive learning into dialogue generation, where the model explicitly perceives the difference between the well-chosen positive and negative utterances, and augments contrastive dialogue learning with group-wise dual sampling.
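A rough sketch of that contrastive objective (not the paper's implementation) would score a group of positive and negative context-response pairs and push probability mass toward the positives, InfoNCE-style:

```python
# Illustrative group-wise contrastive loss over matching scores.
import torch
import torch.nn.functional as F

def group_contrastive_loss(pos_scores: torch.Tensor,
                           neg_scores: torch.Tensor) -> torch.Tensor:
    """pos_scores: (batch, n_pos) and neg_scores: (batch, n_neg) matching scores."""
    logits = torch.cat([pos_scores, neg_scores], dim=-1)
    log_probs = F.log_softmax(logits, dim=-1)
    # Maximize the log-probability mass assigned to the positive group.
    return -log_probs[:, :pos_scores.size(-1)].mean()
```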
...

References

SHOWING 1-10 OF 37 REFERENCES

Exploiting Persona Information for Diverse Generation of Conversational Responses

Both automatic and human evaluations show that the proposed memory-augmented architecture, which exploits persona information from context and incorporates a conditional variational autoencoder, generates diverse and sustainable conversations.

Personalizing Dialogue Agents: I have a dog, do you have pets too?

This work collects data and trains models to condition on their given profile information and on information about the person they are talking to, resulting in improved dialogues as measured by next-utterance prediction.
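One common way to realize such conditioning (an illustration, not the dataset's actual markup) is simply to prepend the profile sentences to the dialogue history before feeding the result to the generator:

```python
# Illustrative persona conditioning: prepend profile facts to the dialogue history.
from typing import List

def build_conditioned_input(persona: List[str], history: List[str]) -> str:
    persona_part = " ".join(f"<p> {fact}" for fact in persona)   # profile sentences
    history_part = " ".join(f"<u> {turn}" for turn in history)   # dialogue turns
    return f"{persona_part} {history_part} <response>"           # generator input
```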

Training Millions of Personalized Dialogue Agents

A new dataset providing 5 million personas and 700 million persona-based dialogues is introduced and it is shown that, at this scale, training using personas still improves the performance of end-to-end systems.

Generating Persona Consistent Dialogues by Exploiting Natural Language Inference

Experimental results on both human and automatic metrics, including the model-based consistency evaluation, demonstrate that the proposed approach outperforms strong generative baselines, especially in the persona-consistency of generated responses.
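As a hedged sketch of an NLI-based consistency check (the `nli_predict` callable is a hypothetical stand-in for a trained NLI model): each persona sentence is treated as the premise and the generated response as the hypothesis, and a predicted contradiction flags the response as inconsistent.

```python
# Hypothetical NLI-based persona-consistency check.
from typing import Callable, List

def is_persona_consistent(response: str,
                          persona: List[str],
                          nli_predict: Callable[[str, str], str]) -> bool:
    """nli_predict(premise, hypothesis) -> 'entailment' | 'neutral' | 'contradiction'."""
    return all(nli_predict(premise, response) != "contradiction"
               for premise in persona)
```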

Deep Reinforcement Learning for Dialogue Generation

This work simulates dialogues between two virtual agents, using policy gradient methods to reward sequences that display three useful conversational properties: informativity (non-repetitive turns), coherence, and ease of answering.
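A simplified sketch of that reward shaping (the weights and component scorers below are illustrative, not the paper's exact values): the per-utterance reward is a weighted sum of the three property scores and is then plugged into a policy-gradient update such as the REINFORCE sketch near the top of this page.

```python
# Illustrative combined reward over the three conversational properties.
def combined_reward(informativity: float,
                    coherence: float,
                    ease_of_answering: float,
                    weights=(0.25, 0.5, 0.25)) -> float:
    w1, w2, w3 = weights  # illustrative weights, not the paper's
    return w1 * informativity + w2 * coherence + w3 * ease_of_answering
```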

TransferTransfo: A Transfer Learning Approach for Neural Network Based Conversational Agents

A new approach to generative data-driven dialogue systems (e.g., chatbots) called TransferTransfo is introduced; it combines a transfer-learning-based training scheme with a high-capacity Transformer model and shows strong improvements over current state-of-the-art end-to-end conversational models.

An Adversarial Learning Framework For A Persona-Based Multi-Turn Dialogue Model

In this paper, we extend the persona-based sequence-to-sequence (Seq2Seq) neural network conversation model to a multi-turn dialogue scenario by modifying the state-of-the-art hredGAN architecture to…

Aiming to Know You Better Perhaps Makes Me a More Engaging Dialogue Partner

This work defines a quantitative metric for one plausible motivation of a chit-chat dialogue agent, namely discovering information about its interlocutor, and proposes an algorithm for the agent to maximize it.

Personalization in Goal-Oriented Dialog

This paper analyzes the shortcomings of an existing end-to-end dialog system based on Memory Networks and proposes modifications to the architecture which enable personalization, and investigates personalization in dialog as a multi-task learning problem.

Learning Personalized End-to-End Goal-Oriented Dialog

A personalized end-to-end model is proposed in an attempt to leverage personalization in goal-oriented dialogs; it achieves qualitative performance improvements over state-of-the-art methods and outperforms other approaches in terms of task completion rate and user satisfaction.