Learning to Express in Knowledge-Grounded Conversation

@article{Zhao2022LearningTE,
  title={Learning to Express in Knowledge-Grounded Conversation},
  author={Xueliang Zhao and Tingchen Fu and Chongyang Tao and Wei Wu and Dongyan Zhao and Rui Yan},
  journal={ArXiv},
  year={2022},
  volume={abs/2204.05805}
}
Grounding dialogue generation in extra knowledge has shown great potential for building a system capable of replying with knowledgeable and engaging responses. Existing studies focus on how to synthesize a response with proper knowledge, yet neglect that the same knowledge could be expressed differently by speakers even under the same context. In this work, we mainly consider two aspects of knowledge expression, namely the structure of the response and the style of the content in each part. We… 

References

SHOWING 1-10 OF 58 REFERENCES

Zero-Resource Knowledge-Grounded Dialogue Generation

This work proposes representing the knowledge that bridges a context and a response, and the way that knowledge is expressed, as latent variables, and devises a variational approach that can effectively estimate a generation model from a dialogue corpus and a knowledge corpus that are independent of each other.

Wizard of Wikipedia: Knowledge-Powered Conversational Agents

The best performing dialogue models are able to conduct knowledgeable discussions on open-domain topics as evaluated by automatic metrics and human evaluations, while a new benchmark allows for measuring further improvements in this important research direction.

DialoGPT: Large-Scale Generative Pre-training for Conversational Response Generation

It is shown that conversational systems that leverage DialoGPT generate more relevant, contentful and context-consistent responses than strong baseline systems.

ReCoSa: Detecting the Relevant Contexts with Self-Attention for Multi-turn Dialogue Generation

Experimental results on both a Chinese customer service dataset and the English Ubuntu dialogue dataset show that ReCoSa significantly outperforms baseline models in terms of both metric-based and human evaluations.

A Dataset for Document Grounded Conversations

This paper describes two neural architectures that provide benchmark performance on the task of generating the next response and finds that the information from the document helps in generating more engaging and fluent responses.

Commonsense Knowledge Aware Conversation Generation with Graph Attention

This is the first attempt to use large-scale commonsense knowledge in conversation generation; unlike existing models that use knowledge triples (entities) separately and independently, this model treats each knowledge graph as a whole, encoding the more structured, connected semantic information in the graphs.

Emotional Chatting Machine: Emotional Conversation Generation with Internal and External Memory

This paper proposes Emotional Chatting Machine (ECM), the first work to address the emotion factor in large-scale conversation generation, using three new mechanisms, the first of which models the high-level abstraction of emotion expressions by embedding emotion categories.

The Stanford CoreNLP Natural Language Processing Toolkit

The design and use of the Stanford CoreNLP toolkit, an extensible pipeline that provides core natural language analysis, is described; its wide adoption is attributed to a simple, approachable design, straightforward interfaces, the inclusion of robust, high-quality analysis components, and the absence of heavy associated baggage.

Knowledge-Grounded Dialogue Generation with Pre-trained Language Models

Empirical results indicate that the proposed approach, which defines response generation by a pre-trained language model with a knowledge selection module and uses an unsupervised method to jointly optimize knowledge selection and response generation with unlabeled dialogues, can significantly outperform state-of-the-art methods in both automatic evaluation and human judgment.

There Are a Thousand Hamlets in a Thousand People’s Eyes: Enhancing Knowledge-grounded Dialogue with Personal Memory

This work proposes a variational method to model the underlying relationship between one's personal memory and his or her selection of knowledge, and devises a learning scheme in which the forward mapping from personal memory to knowledge and its inverse mapping are included in a closed loop so that they can teach each other.
...