Task-Oriented Conversation Generation Using Heterogeneous Memory Networks

@article{Lin2019TaskOrientedCG,
  title={Task-Oriented Conversation Generation Using Heterogeneous Memory Networks},
  author={Zehao Lin and Xinjing Huang and Feng Ji and Haiqing Chen and Ying Zhang},
  journal={ArXiv},
  year={2019},
  volume={abs/1909.11287}
}
How to incorporate external knowledge into a neural dialogue model is critically important for dialogue systems to behave like real humans. Memory networks are a natural and promising way to handle this problem, but existing memory networks do not perform well when leveraging heterogeneous information from different sources. In this paper, we propose a novel and versatile external memory network, called Heterogeneous Memory Networks (HMNs), to simultaneously utilize user…
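The abstract above is truncated, but the core idea it names, reading from memories of different types at once, can be illustrated with a small sketch. The code below is a hypothetical, simplified heterogeneous read in PyTorch: separate attention over a dialogue-history memory and a KB memory, fused by a learned gate. The function name, shapes, and the gating scheme are illustrative assumptions, not the HMN architecture described in the paper.

```python
# Hypothetical sketch of reading from two heterogeneous memories
# (dialogue-history tokens vs. KB records) and fusing the results
# with a learned gate. Names and shapes are illustrative only and
# do not reproduce the HMN architecture from the paper.
import torch
import torch.nn.functional as F

def heterogeneous_read(query, history_mem, kb_mem, gate_layer):
    """query: (d,); history_mem: (n_h, d); kb_mem: (n_k, d)."""
    # Separate attention over each memory, so the two sources
    # do not compete inside a single softmax.
    att_h = F.softmax(history_mem @ query, dim=0)   # (n_h,)
    att_k = F.softmax(kb_mem @ query, dim=0)        # (n_k,)
    read_h = att_h @ history_mem                    # (d,) history read
    read_k = att_k @ kb_mem                         # (d,) KB read
    # A scalar gate decides how much each source contributes.
    g = torch.sigmoid(gate_layer(torch.cat([query, read_h, read_k])))
    return g * read_h + (1 - g) * read_k

d, n_h, n_k = 64, 12, 30
gate = torch.nn.Linear(3 * d, 1)
out = heterogeneous_read(torch.randn(d), torch.randn(n_h, d),
                         torch.randn(n_k, d), gate)
print(out.shape)  # torch.Size([64])
```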

Citations

A Neural Conversation Generation Model via Equivalent Shared Memory Investigation
TLDR
A novel reading and memory framework called Deep Reading Memory Network (DRMN), which remembers useful information from similar conversations to improve utterance generation, is proposed and applied to two large-scale conversation datasets from the justice and e-commerce domains.
Task-Oriented Dialog Generation with Enhanced Entity Representation
TLDR
This work proposes a novel enhanced entity representation (EER) to simultaneously obtain context-sensitive and structure-aware entity representations, and conducts an Out-of-Vocabulary (OOV) test to demonstrate the superiority of EER in handling the common OOV problem.
MultiDM-GCN: Aspect-Guided Response Generation in Multi-Domain Multi-Modal Dialogue System using Graph Convolution Network
TLDR
A multi-modal conversational framework for a task-oriented dialogue setup is presented, which generates responses following the different aspects of a product or service to cater to the user's needs.
Contextualize Knowledge Bases with Transformer for End-to-end Task-Oriented Dialogue Systems
TLDR
This work proposes a COntext-aware Memory Enhanced Transformer framework (COMET), which treats the KB as a sequence and leverages a novel memory mask to restrict each entity's attention to its relevant entities and the dialogue history, avoiding distraction from irrelevant entities (a rough sketch of this kind of attention masking follows the citation list below).
Aspect-Aware Response Generation for Multimodal Dialogue System
TLDR
Quantitative and qualitative analysis on the newly created MDMMD++ dataset shows that the proposed methodology outperforms the baseline models on the proposed task of aspect-controlled response generation in a multimodal task-oriented dialog system.
"Wait, I'm Still Talking!" Predicting the Dialogue Interaction Behavior Using Imagine-Then-Arbitrate Model
TLDR
A novel Imagine-then-Arbitrate (ITA) neural dialogue model helps the agent decide whether to wait or to respond to the user directly; it performs well on the ending-prediction issue and outperforms baseline models.
Predict-Then-Decide: A Predictive Approach for Wait or Answer Task in Dialogue Systems
TLDR
A predictive approach named Predict-then-Decide (PTD) to tackle the Wait-or-Answer problem of dialogue systems, which takes advantage of a decision model to help the dialogue system decide whether to wait or answer.
A Neural Question Answering System for Basic Questions about Subroutines
TLDR
This paper designs a context-based QA system for basic questions about subroutines based on rules the authors extract from recent empirical studies, trains a custom neural QA model on the resulting dataset, and evaluates the model in a study with professional programmers.
Should Answer Immediately or Wait for Further Information? A Novel Wait-or-Answer Task and Its Predictive Approach
TLDR
This paper is the first work to explicitly define the Wait-or-Answer task in dialogue systems and propose a predictive approach dubbed Imagine-then-Arbitrate (ITA), which significantly outperforms existing models in solving this Wait-or-Answer problem.
A Survey of Knowledge-Enhanced Text Generation
TLDR
A comprehensive review of the research on knowledge-enhanced text generation over the past five years is presented, which includes two parts: (i) general methods and architectures for integrating knowledge into text generation; (ii) specific techniques and applications according to different forms of knowledge data.
...
...
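For the COMET entry above, which restricts each KB entity's attention to its own record and the dialogue history, the following is a minimal illustration of how such a memory mask could be constructed for a sequence laid out as [dialogue history | KB entity tokens]. The layout, the build_memory_mask helper, and the row-id encoding are assumptions for illustration and do not reproduce the COMET implementation.

```python
# Illustrative construction of a COMET-style attention mask for a
# sequence laid out as [dialogue history tokens | KB entity tokens].
# History tokens may attend to everything; each KB entity token may
# attend to the history and to entities of its own KB row only.
import torch

def build_memory_mask(n_history, row_ids):
    """row_ids[i] = KB row index of the i-th KB entity token."""
    n_kb = len(row_ids)
    n = n_history + n_kb
    mask = torch.zeros(n, n, dtype=torch.bool)    # True = may attend
    mask[:n_history, :] = True                    # history attends freely
    mask[n_history:, :n_history] = True           # KB entities see history
    rows = torch.tensor(row_ids)
    same_row = rows.unsqueeze(0) == rows.unsqueeze(1)
    mask[n_history:, n_history:] = same_row       # KB entities see own row
    return mask

m = build_memory_mask(n_history=4, row_ids=[0, 0, 0, 1, 1, 1])
print(m.int())
```

In practice such a mask would be applied by filling the disallowed positions of the attention scores with a large negative value before the softmax.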

References

SHOWING 1-10 OF 34 REFERENCES
Hierarchical Variational Memory Network for Dialogue Generation
TLDR
A novel hierarchical variational memory network (HVMN) is proposed, adding a hierarchical structure and a variational memory network to a neural encoder-decoder network; it can capture both high-level abstract variations and long-term memories during dialogue tracking, which enables random access to relevant dialogue histories.
Key-Value Retrieval Networks for Task-Oriented Dialogue
TLDR
This work proposes a new neural dialogue agent that is able to effectively sustain grounded, multi-domain discourse through a novel key-value retrieval mechanism and significantly outperforms a competitive rule-based system and other existing neural dialogue architectures on the provided domains according to both automatic and human evaluation metrics.
A Copy-Augmented Sequence-to-Sequence Architecture Gives Good Performance on Task-Oriented Dialogue
TLDR
This model outperforms more complex memory-augmented models by 7% in per-response generation and is on par with the current state of the art on DSTC2, a real-world task-oriented dialogue dataset (a rough sketch of copy-augmented decoding follows this reference list).
Mem2Seq: Effectively Incorporating Knowledge Bases into End-to-End Task-Oriented Dialog Systems
TLDR
This paper empirically shows how Mem2Seq controls each generation step and how its multi-hop attention mechanism helps in learning correlations between memories (a minimal sketch of multi-hop memory attention also follows this reference list).
End-to-End Reinforcement Learning of Dialogue Agents for Information Access
This paper proposes KB-InfoBot, a multi-turn dialogue agent that helps users search Knowledge Bases (KBs) without composing complicated queries. Such goal-oriented dialogue agents typically need…
End-to-End Task-Completion Neural Dialogue Systems
TLDR
The end-to-end system not only outperforms modularized dialogue system baselines on both objective and subjective evaluation, but is also robust to noise, as demonstrated by several systematic experiments with different error granularities and rates specific to the language understanding module.
Natural Answer Generation with Heterogeneous Memory
TLDR
This work proposes a novel attention mechanism that encourages the decoder to actively interact with the memory by taking its heterogeneity into account; it can effectively explore heterogeneous memory to produce readable and meaningful answer sentences while maintaining high coverage of the given answer information.
A Network-based End-to-End Trainable Task-oriented Dialogue System
TLDR
This work introduces a neural network-based, text-in text-out, end-to-end trainable goal-oriented dialogue system, along with a new way of collecting dialogue data based on a novel pipelined Wizard-of-Oz framework; the system can converse with human subjects naturally whilst helping them accomplish tasks in a restaurant search domain.
A Knowledge-Grounded Neural Conversation Model
TLDR
A novel, fully data-driven, and knowledge-grounded neural conversation model aimed at producing more contentful responses that generalizes the widely-used Sequence-to-Sequence (seq2seq) approach by conditioning responses on both conversation history and external “facts”, allowing the model to be versatile and applicable in an open-domain setting.
Augmenting End-to-End Dialog Systems with Commonsense Knowledge
TLDR
This model represents the first attempt to integrate a large commonsense knowledge base into end-to-end conversational models, and results suggest that the knowledge-augmented models are superior to their knowledge-free counterparts in automatic evaluation.
...
...
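As flagged in the copy-augmented sequence-to-sequence entry above, a copy mechanism mixes the decoder's vocabulary distribution with a distribution over tokens it can copy from the input. The sketch below is a pointer-generator-style illustration under assumed shapes and a toy mixing gate; copy_augmented_step, its arguments, and the gating are hypothetical and not that paper's exact formulation.

```python
# Rough sketch of a copy-augmented decoding step: the output distribution
# mixes a vocabulary softmax with a copy distribution over tokens from the
# dialogue history / KB. Hypothetical shapes and names.
import torch
import torch.nn.functional as F

def copy_augmented_step(dec_state, vocab_proj, src_states, src_token_ids, vocab_size):
    """dec_state: (d,); src_states: (n, d); src_token_ids: (n,) int ids."""
    p_vocab = F.softmax(vocab_proj(dec_state), dim=0)          # (V,) generate
    copy_scores = F.softmax(src_states @ dec_state, dim=0)     # (n,) attention
    p_copy = torch.zeros(vocab_size)
    p_copy.index_add_(0, src_token_ids, copy_scores)           # scatter onto vocab
    p_gen = torch.sigmoid(dec_state.sum())                     # toy mixing gate
    return p_gen * p_vocab + (1 - p_gen) * p_copy

d, n, V = 32, 10, 100
proj = torch.nn.Linear(d, V)
dist = copy_augmented_step(torch.randn(d), proj,
                           torch.randn(n, d),
                           torch.randint(0, V, (n,)), V)
print(dist.sum())  # ~1.0
```

A real model would learn the mixing gate from the decoder state rather than use the toy scalar shown here.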
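For the Mem2Seq entry above, multi-hop attention follows the end-to-end memory network pattern of repeatedly attending over a memory and refining the query with the read vector. The sketch below is a minimal, assumed version of that pattern (per-hop key/value matrices, three hops); it is not the Mem2Seq reference implementation.

```python
# Minimal sketch of multi-hop attention over a memory, in the spirit of
# the end-to-end memory networks that Mem2Seq builds on. Shapes and names
# are illustrative only.
import torch
import torch.nn.functional as F

def multi_hop_read(query, memory_keys, memory_values, hops=3):
    """query: (d,); memory_keys/memory_values: list of (n, d) per hop."""
    q = query
    for k in range(hops):
        att = F.softmax(memory_keys[k] @ q, dim=0)   # attention over memory
        o = att @ memory_values[k]                   # read vector for this hop
        q = q + o                                    # refine query for next hop
    return q, att

d, n = 64, 20
keys = [torch.randn(n, d) for _ in range(3)]
vals = [torch.randn(n, d) for _ in range(3)]
q_final, last_att = multi_hop_read(torch.randn(d), keys, vals)
print(q_final.shape, last_att.shape)  # torch.Size([64]) torch.Size([20])
```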