Reasoning in Dialog: Improving Response Generation by Context Reading Comprehension

@inproceedings{Chen2020ReasoningID,
  title={Reasoning in Dialog: Improving Response Generation by Context Reading Comprehension},
  author={Xiuying Chen and Zhi Cui and Jiayi Zhang and Chen Wei and Jianwei Cui and Bin Wang and Dongyan Zhao and Rui Yan},
  booktitle={AAAI Conference on Artificial Intelligence},
  year={2020}
}
In multi-turn dialog, utterances do not always take the full form of sentences (Carbonell 1983), which naturally makes understanding the dialog context more difficult. However, it is essential to fully grasp the dialog context to generate a reasonable response. Hence, in this paper, we propose to improve the response generation performance by examining the model's ability to answer a reading comprehension question, where the question is focused on the omitted information in the dialog… 

Citations

CGIM: A Cycle Guided Interactive Learning Model for Consistency Identification in Task-oriented Dialogue

This work aims to solve the CI-ToD task by introducing an explicit interaction paradigm, the Cycle Guided Interactive learning Model (CGIM), which enables explicit information exchange among all three tasks via a cycle interaction manner.

Target-aware Abstractive Related Work Generation with Contrastive Learning

An abstractive target-aware related work generator (TAG) is proposed, which models the relationships between reference papers and the target paper with target-centered attention mechanisms and brings substantial improvements over several strong baselines in terms of automatic and tailored human evaluations.

The Style-Content Duality of Attractiveness: Learning to Write Eye-Catching Headlines via Disentanglement

A Disentanglement-based Attractive Headline Generator (DAHG) that generates a headline capturing the attractive content in the attractive style, taking the polished document as input to generate the headline under the guidance of that style.

Emotion Conditioned Creative Dialog Generation

We present a DialoGPT-based model for generating creative dialog responses conditioned on one of the following emotions: anger, disgust, fear, happiness, pain, sadness, and surprise.

EZInterviewer: To Improve Job Interview Performance with Mock Interview Generator

A novel application named EZInterviewer is proposed, which aims to learn from online interview data and provide mock interview services to job seekers, reducing the number of parameters that rely on interview dialogs by disentangling the knowledge selector and the dialog generator.

Logical Reasoning for Task Oriented Dialogue Systems

This work proposes a novel method to fine-tune pretrained transformer models such as RoBERTa and T5 to reason over a set of facts in a given dialogue context, and shows that the transformer-based model can perform logical reasoning to answer questions when the dialogue context contains all the required information.

Enhancing the Open-Domain Dialogue Evaluation in Latent Space

Experimental results on two real-world dialogue datasets confirm the superiority of the self-supervised method for open-domain dialogue evaluation, where both Pearson and Spearman correlations with human judgments outperform all baselines.

References

Showing 1–10 of 58 references

Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues

This paper proposes learning a context-response matching model with auxiliary self-supervised tasks designed for dialogue data on top of pre-trained language models (PLMs), and jointly training the PLM-based response selection model with these auxiliary tasks in a multi-task manner.

Cosmos QA: Machine Reading Comprehension with Contextual Commonsense Reasoning

This paper introduces Cosmos QA, a large-scale dataset of 35,600 problems that require commonsense-based reading comprehension, formulated as multiple-choice questions, and proposes a new architecture that improves over the competitive baselines.

CoQA: A Conversational Question Answering Challenge

CoQA is introduced, a novel dataset for building Conversational Question Answering systems and it is shown that conversational questions have challenging phenomena not present in existing reading comprehension datasets (e.g., coreference and pragmatic reasoning).

Are Training Samples Correlated? Learning to Generate Dialogue Responses with Multiple References

Experimental results show that the proposed model can effectively improve the quality of response and outperform existing neural dialogue models on both automatic and human evaluations.

Coarse-to-Fine Question Answering for Long Documents

A framework for question answering that can efficiently scale to longer documents while maintaining or even improving performance of state-of-the-art models is presented and sentence selection is treated as a latent variable trained jointly from the answer only using reinforcement learning.

DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation

It is shown that conversational systems that leverage DialoGPT generate more relevant, contentful and context-consistent responses than strong baseline systems.

Learning End-to-End Goal-Oriented Dialog

It is shown that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations; it is compared against a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge.

Improving Open-Domain Dialogue Systems via Multi-Turn Incomplete Utterance Restoration

A large-scale multi-turn dataset is collected and manually labeled with the explicit relation between an utterance and its context and a “pick-and-combine” model is proposed to restore the incomplete utterance from its context.

Query-Reduction Networks for Question Answering

Query-Reduction Network (QRN), a variant of Recurrent Neural Network (RNN) that effectively handles both short-term and long-term sequential dependencies to reason over multiple facts, is proposed.

One Time of Interaction May Not Be Enough: Go Deep with an Interaction-over-Interaction Network for Response Selection in Dialogues

Evaluation results on three benchmark data sets indicate that IoI can significantly outperform state-of-the-art methods in terms of various matching metrics, and they unveil how the depth of interaction affects the performance of IoI.
...