Corpus ID: 85496713

Knowledge Aware Conversation Generation with Reasoning on Augmented Graph

@article{Liu2019KnowledgeAC,
  title={Knowledge Aware Conversation Generation with Reasoning on Augmented Graph},
  author={Zhibin Liu and Zheng-Yu Niu and Hua Wu and Haifeng Wang},
  journal={ArXiv},
  year={2019},
  volume={abs/1903.10245}
}
Two types of knowledge, factoid knowledge from graphs and non-factoid knowledge from unstructured documents, have been studied for knowledge-aware open-domain conversation generation, in which edge information in graphs can help the generalization of knowledge selectors, and text sentences of non-factoid knowledge can provide rich information for response generation. Key Method: To fully leverage the long text information that differentiates our graph from others, we improve a state-of-the-art reasoning algorithm…
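As a rough illustration of the augmented-graph idea described in the abstract (a toy sketch under assumed data structures, not the authors' implementation), factoid triples and non-factoid text sentences can be merged into a single graph by attaching each sentence as a node linked to the entities it mentions:

```python
def build_augmented_graph(triples, sentences):
    """Merge (head, relation, tail) triples and free-text sentences into one
    adjacency map; each sentence becomes a node linked to every entity whose
    name appears in it. Entity matching here is naive substring matching."""
    entities = {h for h, _, _ in triples} | {t for _, _, t in triples}
    graph = {e: [] for e in entities}
    for h, r, t in triples:
        graph[h].append((r, t))
    for i, sent in enumerate(sentences):
        node = f"sent_{i}"
        graph[node] = []
        for e in entities:
            if e.lower() in sent.lower():
                # bidirectional links between entity and sentence node
                graph[e].append(("mentioned_in", node))
                graph[node].append(("mentions", e))
    return graph
```

A reasoner can then walk both triple edges (factoid) and sentence edges (non-factoid) from the same graph, which is the property the paper's reasoning algorithm exploits.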

Citing Papers

DukeNet: A Dual Knowledge Interaction Network for Knowledge-Grounded Conversation

Experimental results on two public KGC benchmarks show that DukeNet significantly outperforms state-of-the-art methods in both automatic and human evaluations, indicating that DukeNet, enhanced by DukeL, can select more appropriate knowledge and hence generate more informative and engaging responses.

Sequential Latent Knowledge Selection for Knowledge-Grounded Dialogue

The proposed sequential latent variable model can keep track of the prior and posterior distributions over knowledge, and can not only reduce the ambiguity caused by the diversity of knowledge selection in conversation but also better leverage the response information for a proper choice of knowledge.
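The prior/posterior distinction can be sketched in miniature (toy vectors and a dot-product scorer, an assumption rather than the paper's architecture): the prior scores each knowledge candidate against the dialogue context alone, while the posterior also sees the gold response, and the gap between them (e.g. a KL term) can drive training:

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def knowledge_distributions(context_vec, response_vec, knowledge_vecs):
    """Prior: softmax over dot products with the context only.
    Posterior: same, but against context + response (element-wise sum).
    Returns (prior, posterior, KL(posterior || prior))."""
    prior = softmax([dot(context_vec, k) for k in knowledge_vecs])
    mixed = [c + r for c, r in zip(context_vec, response_vec)]
    posterior = softmax([dot(mixed, k) for k in knowledge_vecs])
    kl = sum(p * math.log(p / q) for p, q in zip(posterior, prior))
    return prior, posterior, kl
```

At inference time only the prior is available, which is why training it to track the (response-aware) posterior matters.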

Adapting to Context-Aware Knowledge in Natural Conversation for Multi-Turn Response Selection

A model called MNDB is proposed to model natural dialog behaviors for multi-turn response selection; a ternary-grounding network is designed to mimic how users incorporate knowledge in natural conversations, and the model significantly outperforms state-of-the-art models.

Initiative-Aware Self-Supervised Learning for Knowledge-Grounded Conversations

Experimental results on two datasets show that MIKe significantly outperforms state-of-the-art methods in terms of both automatic and human evaluations, indicating that it can select more appropriate knowledge and generate more informative and engaging responses.

Knowledge-Augmented Methods for Natural Language Processing

This tutorial introduces the key steps in integrating knowledge into NLP, including knowledge grounding from text and knowledge representation and fusing, and surveys recent state-of-the-art applications of fused knowledge in language understanding, language generation, and commonsense reasoning.

A Survey of Knowledge-enhanced Text Generation

A comprehensive review of the research on knowledge-enhanced text generation over the past five years is presented, which includes two parts: (i) general methods and architectures for integrating knowledge into text generation; (ii) specific techniques and applications according to different forms of knowledge data.

A Compare Aggregate Transformer for Understanding Document-grounded Dialogue

A Compare Aggregate Transformer (CAT) is proposed to jointly denoise the dialogue context and aggregate the document information for response generation and two metrics for evaluating document utilization efficiency based on word overlap are proposed.
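The summary mentions word-overlap metrics for document utilization without defining them; a plausible minimal version (an assumption for illustration, not CAT's exact metric) measures what fraction of a response's content words are copied from the grounding document:

```python
# Hypothetical document-utilization metric based on word overlap.
# The stopword list is a toy placeholder, not a standard resource.
STOPWORDS = frozenset({"the", "a", "an", "is", "of", "to", "and", "in"})

def doc_utilization(response, document):
    """Fraction of the response's non-stopword tokens that also occur
    in the grounding document (case-insensitive, whitespace tokens)."""
    resp = [w for w in response.lower().split() if w not in STOPWORDS]
    if not resp:
        return 0.0
    doc = set(document.lower().split())
    return sum(w in doc for w in resp) / len(resp)
```

A higher score suggests the generator is actually drawing on the document rather than producing generic replies.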

A Survey of Document Grounded Dialogue Systems (DGDS)

The classification, architecture, datasets, models, and future development trends of the DGDS are analyzed, hoping to help researchers in this field.

RefNet: A Reference-aware Network for Background Based Conversation

Experimental results show that RefNet significantly outperforms state-of-the-art methods in both automatic and human evaluations, indicating that RefNet can generate more appropriate and human-like responses.

Challenges in Building Intelligent Open-domain Dialog Systems

This article reviews the recent work on neural approaches that are devoted to addressing three challenges in developing intelligent open-domain dialog systems: semantics, consistency, and interactiveness.

References

Showing 1-10 of 35 references

Commonsense Knowledge Aware Conversation Generation with Graph Attention

This is the first attempt to use large-scale commonsense knowledge in conversation generation; unlike existing models that use knowledge triples (entities) separately and independently, this model treats each knowledge graph as a whole, encoding more structured, connected semantic information from the graphs.
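Treating a retrieved graph "as a whole" can be illustrated with a static attention sketch (toy vectors and a plain dot-product/softmax pooling, not the paper's exact graph attention): each triple is scored against a query representation and the triples are pooled into one graph vector.

```python
import math

def attend_over_triples(query, triple_vecs):
    """Score each triple embedding against the query (dot product),
    normalize with softmax, and return (weights, weighted-sum pooling)."""
    scores = [sum(q * t for q, t in zip(query, tv)) for tv in triple_vecs]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(query)
    pooled = [sum(w * tv[d] for w, tv in zip(weights, triple_vecs))
              for d in range(dim)]
    return weights, pooled
```

The pooled vector summarizes the whole graph for the decoder, instead of injecting one triple at a time.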

Flexible End-to-End Dialogue System for Knowledge Grounded Conversation

A dynamic knowledge enquirer is designed that selects different answer entities at different positions in a single response according to the local context, enabling the model to deal with out-of-vocabulary entities.

Question Answering on Knowledge Bases and Text using Universal Schema and Memory Networks

Evaluation results on the SPADES fill-in-the-blank question answering dataset show that exploiting universal schema for question answering is better than using either a KB or text alone, and the model outperforms the previous state-of-the-art by 8.5 F1 points.

Incorporating Loose-Structured Knowledge into LSTM with Recall Gate for Conversation Modeling

A loosely structured domain knowledge base is introduced that can be built with a small amount of manual work and easily exploited by the Recall gate, enhancing the LSTM by cooperating with its local memory to capture the implicit semantic relevance between sentences within conversations.

Wizard of Wikipedia: Knowledge-Powered Conversational Agents

The best performing dialogue models are able to conduct knowledgeable discussions on open-domain topics as evaluated by automatic metrics and human evaluations, while a new benchmark allows for measuring further improvements in this important research direction.

Open domain question answering using Wikipedia-based knowledge model

Chains of Reasoning over Entities, Relations, and Text using Recurrent Neural Networks

This paper learns to jointly reason about relations, entities, and entity-types, and uses neural attention modeling to incorporate multiple paths in a single RNN that represents logical composition across all relations.
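Composing a multi-hop path in a recurrent model can be sketched with a parameter-free toy update (an illustration of the folding idea only, not the paper's RNN): relation embeddings along a path are folded into one vector with h_t = tanh(h_{t-1} + r_t).

```python
import math

def compose_path(relation_vecs):
    """Fold a sequence of relation embeddings (a KG path) into a single
    path representation with the recurrent update h_t = tanh(h_{t-1} + r_t).
    A learned RNN would replace the addition with trained weight matrices."""
    h = [0.0] * len(relation_vecs[0])
    for r in relation_vecs:
        h = [math.tanh(a + b) for a, b in zip(h, r)]
    return h
```

The resulting path vector can then be compared against a target relation embedding to score whether the path implies that relation.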

Dialog Generation Using Multi-Turn Reasoning Neural Networks

In this paper, we propose a generalizable dialog generation approach that adapts multi-turn reasoning, a recent advancement in the field of document comprehension, to generate responses (“answers”) to dialog inputs (“questions”).

Knowledge Diffusion for Neural Dialogue Generation

A neural knowledge diffusion model is proposed to introduce knowledge into dialogue generation that can not only match the relevant facts for the input utterance but also diffuse them to similar entities, with the help of facts matching and entity diffusion.

Open Domain Question Answering Using Early Fusion of Knowledge Bases and Text

A novel model, GRAFT-Net, is proposed for extracting answers from a question-specific subgraph containing text as well as Knowledge Base entities and relations; it is competitive with the state-of-the-art when tested using either KBs or text alone, and vastly outperforms existing methods in the combined setting.
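The "question-specific subgraph" step can be sketched with a toy seed-and-expand heuristic (an assumption for illustration; GRAFT-Net's actual retrieval uses entity linking and personalized PageRank): seed on entities mentioned in the question, then keep all triples within one hop of a seed.

```python
def question_subgraph(question, triples):
    """Return the triples within one hop of any entity whose name appears
    in the question (naive substring matching on lowercase text)."""
    q = question.lower()
    entities = {h for h, _, _ in triples} | {t for _, _, t in triples}
    seeds = {e for e in entities if e.lower() in q}
    return [(h, r, t) for h, r, t in triples if h in seeds or t in seeds]
```

Answer extraction then runs only over this small subgraph (plus linked text), rather than the full KB.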