Regularizing Dialogue Generation by Imitating Implicit Scenarios

@article{Feng2020RegularizingDG,
  title={Regularizing Dialogue Generation by Imitating Implicit Scenarios},
  author={Shaoxiong Feng and Xuancheng Ren and Hongshen Chen and Bin Sun and Kan Li and Xu Sun},
  journal={ArXiv},
  year={2020},
  volume={abs/2010.01893}
}
Human dialogues are scenario-based and appropriate responses generally relate to the latent context knowledge entailed by the specific scenario. To enable responses that are more meaningful and context-specific, we propose to improve generative dialogue systems from the scenario perspective, where both dialogue history and future conversation are taken into account to implicitly reconstruct the scenario knowledge. More importantly, the conversation scenarios are further internalized using… 
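
The abstract is cut off at the internalization step, but the setup it describes, a response model that also learns from the future turns of a conversation, is naturally expressed as imitation: a student that sees only the history is pulled toward a teacher that additionally conditioned on the future. The sketch below is a minimal illustration of that pattern under our own assumptions (the function name, the KL form, and the temperature are illustrative, not the paper's exact method).

import torch
import torch.nn.functional as F

def imitation_loss(student_logits: torch.Tensor,
                   teacher_logits: torch.Tensor,
                   tau: float = 1.0) -> torch.Tensor:
    # The student conditions on dialogue history only; teacher_logits
    # come from a model that also saw the future turns (the implicit
    # scenario). KL pulls the student toward the teacher's distribution.
    teacher_probs = F.softmax(teacher_logits.detach() / tau, dim=-1)
    student_logp = F.log_softmax(student_logits / tau, dim=-1)
    return F.kl_div(student_logp, teacher_probs,
                    reduction="batchmean") * tau ** 2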

ProphetChat: Enhancing Dialogue Generation with Simulation of Future Conversation

A novel dialogue generation framework named ProphetChat is proposed that utilizes simulated dialogue futures in the inference phase to enhance response generation; experiments demonstrate that ProphetChat generates better responses than strong baselines, validating the advantages of incorporating simulated dialogue futures.

Precognition in Task-oriented Dialogue Understanding: Posterior Regularization by Future Context

This paper proposes to jointly model historical and future information through the posterior regularization method, modeling the current utterance and past contexts as the prior and the entire dialogue flow as the posterior, and optimizing the KL divergence between these distributions to regularize the model during training.
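
Rendered as a formula (with symbols of our choosing rather than the paper's), the regularizer is the divergence between a posterior that sees the whole dialogue flow and a prior that sees only the past:

\mathcal{L}_{\mathrm{KL}} = D_{\mathrm{KL}}\big(\, q(z \mid u_t, c_{<t}, c_{>t}) \,\|\, p(z \mid u_t, c_{<t}) \,\big)

where u_t is the current utterance, c_{<t} the past context, c_{>t} the future context, and z the latent state; minimizing this term during training injects future information into the prior-side model that is actually used at inference.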

Improved Goal Oriented Dialogue via Utterance Generation and Look Ahead

This work shows that intent prediction can be improved by training a deep text-to-text neural model to generate successive user utterances from unlabeled dialogue data, and presents a novel look-ahead approach that uses user utterance generation to improve intent prediction at inference time.

Hierarchical Inductive Transfer for Continual Dialogue Learning

A hierarchical inductive transfer framework is proposed that enables new tasks to use general knowledge in the base adapter without being misled by diverse knowledge in task-specific adapters and obtains comparable performance under deployment-friendly model capacity.

Recent Advances in Deep Learning Based Dialogue Systems: A Systematic Survey

This survey is the most comprehensive and up-to-date one at present in the area of dialogue systems and dialogue-related tasks, extensively covering the popular frameworks, topics, and datasets.

Multi-View Feature Representation for Dialogue Generation with Bidirectional Distillation

A novel training framework that extends unidirectional distillation to bidirectional distillation, encouraging the student and its student peers to co-evolve by exchanging complementary knowledge with each other, and improves model generalization without sacrificing training efficiency.
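
A compact way to see the bidirectional part is the mutual-learning loss below, in which each student peer matches the other's (detached) output distribution; this follows the general deep-mutual-learning recipe and is an assumption about the form, not necessarily the paper's exact objective.

import torch
import torch.nn.functional as F

def mutual_distillation_loss(logits_a: torch.Tensor,
                             logits_b: torch.Tensor,
                             tau: float = 1.0) -> torch.Tensor:
    # Soften both peers' predictions with temperature tau.
    log_pa = F.log_softmax(logits_a / tau, dim=-1)
    log_pb = F.log_softmax(logits_b / tau, dim=-1)
    # KL in both directions: A imitates B and B imitates A,
    # with the "teacher" side detached in each term.
    loss_a = F.kl_div(log_pa, log_pb.detach().exp(), reduction="batchmean")
    loss_b = F.kl_div(log_pb, log_pa.detach().exp(), reduction="batchmean")
    return loss_a + loss_b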

Generating Relevant and Coherent Dialogue Responses using Self-Separated Conditional Variational AutoEncoders

Self-separated Conditional Variational AutoEncoder (abbreviated as SepaCVAE) is proposed that introduces group information to regularize the latent variables, which enhances CVAE by improving the responses’ relevance and coherence while maintaining their diversity and informativeness.

Stop Filtering: Multi-View Attribute-Enhanced Dialogue Learning

A multi-view attribute-enhanced dialogue learning framework is proposed that strengthens attribute-related features more robustly and comprehensively, improving model performance by enhancing dialogue attributes and fusing view-specific knowledge.

Probing Product Description Generation via Posterior Distillation

An adaptive posterior network based on the Transformer architecture is proposed that can utilize information customers care about from reviews, and it is superior to traditional generative models in both automatic metrics and human evaluation.

Towards Standard Criteria for human evaluation of Chatbots: A Survey

A thorough investigation of 105 papers involving human evaluation of chatbots proposes five standard criteria along with precise definitions for off-the-shelf settings.

References

Showing 1-10 of 53 references.

NEXUS Network: Connecting the Preceding and the Following in Dialogue Generation

It is argued that a good response should smoothly connect both the preceding dialogue history and the following conversations, and an auxiliary continuous code space is introduced to sidestep the non-differentiability of discrete natural language tokens.

A Knowledge-Grounded Neural Conversation Model

This paper presents a novel, fully data-driven, and knowledge-grounded neural conversation model aimed at producing more contentful responses, generalizing the widely-used Sequence-to-Sequence (Seq2Seq) approach by conditioning responses on both conversation history and external “facts”, allowing the model to be versatile and applicable in an open-domain setting.

Improving Neural Conversational Models with Entropy-Based Data Filtering

This work presents a method of filtering dialog datasets by removing generic utterances from training data using a simple entropy-based approach that does not require human supervision, and shows that training on datasets filtered this way results in better conversational quality as chatbots learn to output more diverse responses.
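
One direction of that idea can be sketched in a few lines: score each source utterance by the entropy of its observed response distribution and drop high-entropy (i.e., generic) pairs. The threshold and the side on which entropy is measured vary across the paper's variants, so treat this as an illustrative version.

import math
from collections import Counter, defaultdict

def response_entropy(pairs):
    # Entropy of the response distribution observed for each source.
    responses = defaultdict(Counter)
    for src, tgt in pairs:
        responses[src][tgt] += 1
    entropy = {}
    for src, counts in responses.items():
        total = sum(counts.values())
        entropy[src] = -sum((c / total) * math.log2(c / total)
                            for c in counts.values())
    return entropy

def filter_pairs(pairs, threshold=1.0):
    # Keep only pairs whose source has a low-entropy response set.
    ent = response_entropy(pairs)
    return [(s, t) for s, t in pairs if ent[s] <= threshold]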

A Hierarchical Latent Variable Encoder-Decoder Model for Generating Dialogues

A neural network-based generative architecture with stochastic latent variables that span a variable number of time steps is proposed; it improves upon recently proposed models, and the latent variables facilitate both the generation of meaningful, long, and diverse responses and the maintenance of dialogue state.

Get The Point of My Utterance! Learning Towards Effective Responses with Multi-Head Attention Mechanism

A novel Multi-Head Attention Mechanism (MHAM) for generative dialogue systems is proposed, which aims at capturing multiple semantic aspects of the user utterance, and a regularizer is formulated to force different attention heads to concentrate on distinct aspects.
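
A regularizer of that flavor can be written as the inter-head disagreement penalty ||A A^T - I||_F^2 popularized by Lin et al. (2017) for self-attention; whether MHAM uses exactly this form is our assumption, offered only to make the idea concrete.

import torch

def head_disagreement_penalty(attn: torch.Tensor) -> torch.Tensor:
    # attn: (batch, heads, seq_len), each row a head's attention
    # distribution over the utterance. Pushing the Gram matrix toward
    # the identity makes heads attend to different positions.
    gram = torch.bmm(attn, attn.transpose(1, 2))   # (batch, heads, heads)
    eye = torch.eye(attn.size(1), device=attn.device)
    return ((gram - eye) ** 2).sum(dim=(1, 2)).mean()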

A Diversity-Promoting Objective Function for Neural Conversation Models

This work proposes using Maximum Mutual Information (MMI) as the objective function in neural models, and demonstrates that the proposed MMI models produce more diverse, interesting, and appropriate responses, yielding substantive gains in BLEU scores on two conversational datasets and in human evaluations.
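
For reference, the MMI-antiLM variant in that work decodes by

\hat{T} = \arg\max_{T} \big\{ \log p(T \mid S) - \lambda \log p(T) \big\}

where S is the source context, T a candidate response, and the -\lambda \log p(T) term penalizes generic, high-frequency responses; the MMI-bidi variant instead scores (1 - \lambda) \log p(T \mid S) + \lambda \log p(S \mid T).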

Personalizing Dialogue Agents: I have a dog, do you have pets too?

This work collects data and trains models to condition on their given profile information and on information about the person they are talking to, resulting in improved dialogues, as measured by next-utterance prediction.

ReCoSa: Detecting the Relevant Contexts with Self-Attention for Multi-turn Dialogue Generation

Experimental results on both a Chinese customer service dataset and the English Ubuntu dialogue dataset show that ReCoSa significantly outperforms baseline models in terms of both metric-based and human evaluations.

Dialogue Generation: From Imitation Learning to Inverse Reinforcement Learning

This work extends a recently proposed adversarial dialogue generation method to an adversarial imitation learning solution and proposes a new reward model for dialogue generation that can provide a more accurate and precise reward signal for generator training.

Adversarial Learning for Neural Dialogue Generation

This work applies adversarial training to open-domain dialogue generation, training a system to produce sequences that are indistinguishable from human-generated dialogue utterances, and investigates models for adversarial evaluation that use success in fooling an adversary as a dialogue evaluation metric, while avoiding a number of potential pitfalls.
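
The generator update in this line of work is typically a REINFORCE-style policy gradient with the discriminator's human-vs-machine probability as the reward; the sketch below shows that shape (the baseline value and tensor layout are illustrative assumptions, not the paper's exact recipe).

import torch

def generator_pg_loss(log_probs: torch.Tensor,
                      d_human_prob: torch.Tensor,
                      baseline: float = 0.5) -> torch.Tensor:
    # log_probs: (batch, T) token log-probabilities of sampled responses.
    # d_human_prob: (batch,) discriminator P(human | context, response).
    reward = (d_human_prob - baseline).detach()     # centred reward signal
    return -(log_probs.sum(dim=1) * reward).mean()  # REINFORCE objective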
...