Retrieval-Free Knowledge-Grounded Dialogue Response Generation with Adapters

@inproceedings{Xu2022RetrievalFreeKD,
  title={Retrieval-Free Knowledge-Grounded Dialogue Response Generation with Adapters},
  author={Yan Xu and Etsuko Ishii and Zihan Liu and Genta Indra Winata and Dan Su and Andrea Madotto and Pascale Fung},
  booktitle={DIALDOC},
  year={2022}
}
To diversify and enrich generated dialogue responses, knowledge-grounded dialogue has been investigated in recent years. Existing methods tackle the knowledge-grounding challenge by retrieving relevant sentences from a large corpus and augmenting the dialogues with this explicit extra information. Despite their success, however, existing works have drawbacks in inference efficiency. This paper proposes KnowExpert, an end-to-end framework to bypass the explicit retrieval process and… 
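The abstract only sketches the approach. As a rough illustration of the general adapter idea it builds on (small bottleneck modules trained while the backbone language model stays frozen, so knowledge lives in the added parameters rather than in a retrieved passage), here is a minimal PyTorch sketch. The names BottleneckAdapter and add_adapters, and all hyperparameters, are hypothetical and are not taken from the paper's implementation.

import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Illustrative bottleneck adapter: down-project, non-linearity, up-project,
    with a residual connection, attached to a frozen transformer sub-layer."""
    def __init__(self, hidden_size: int, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck_size, hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection: the adapter starts close to an identity map,
        # so the frozen backbone's behaviour is preserved at initialization.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

def add_adapters(backbone: nn.Module, hidden_size: int, num_layers: int) -> nn.ModuleList:
    """Freeze the backbone LM and create one adapter per transformer layer.
    Only the adapter parameters are trained (e.g. with a language-modeling
    objective on a knowledge corpus), so knowledge is stored in the adapter
    weights instead of being retrieved at inference time."""
    for param in backbone.parameters():
        param.requires_grad = False
    return nn.ModuleList(BottleneckAdapter(hidden_size) for _ in range(num_layers))

At inference, such adapters would be applied inside each transformer layer in place of an external retrieval step; how the knowledge "experts" are trained, selected, and combined is described in the paper itself.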

Citations

On Controlling Fallback Responses for Grounded Dialogue Generation
TLDR
A novel framework that automatically generates a control token with the generator to bias the succeeding response towards informativeness for answerable contexts and fallback for unanswerable contexts in an end-to-end manner is proposed.
Unsupervised Knowledge Selection for Dialogue Generation
TLDR
A novel Distilled Distant Supervision Loss (DDSL) is proposed to supervise knowledge selection when the gold knowledge label is unknown; the resulting model selects knowledge more accurately in the unsupervised setting and generates more informative responses, even outperforming many strong supervised baselines.
CAiRE in DialDoc21: Data Augmentation for Information Seeking Dialogue System
TLDR
This work utilizes data augmentation methods and several training techniques with pre-trained language models to learn a general pattern of the task, and thus achieves promising performance in the DialDoc21 competition.
Lexical Knowledge Internalization for Neural Dialog Generation
TLDR
This work proposes knowledge internalization (KI), which aims to complement neural dialog models with lexical knowledge by integrating knowledge about each input token internally into the model's parameters.
Think Before You Speak: Explicitly Generating Implicit Commonsense Knowledge for Response Generation
TLDR
Think-Before-Speaking is presented, a generative approach to first externalize implicit commonsense knowledge (think) and use this knowledge to generate responses (speak), arguing that externalizing implicit knowledge allows more efficient learning, produces more informative responses, and enables more explainable models.
Think Before You Speak: Using Self-talk to Generate Implicit Commonsense Knowledge for Response Generation
TLDR
This paper presents a self-talk approach that first generates the implicit commonsense knowledge and then generates a response by referencing the externalized knowledge, all using one generative model.
Partner Personas Generation for Diverse Dialogue Generation
TLDR
A novel framework is offered that leverages automatic partner-persona generation to enhance the succeeding dialogue generation, incorporating reinforcement learning with a specially designed critic network for reward judgement.
Survey of Hallucination in Natural Language Generation
TLDR
This survey serves to facilitate collaborative efforts among researchers in tackling the challenge of hallucinated text in NLG by providing a broad overview of the research progress and challenges in the hallucination problem in NLG.

References

Showing 1-10 of 58 references
Zero-Resource Knowledge-Grounded Dialogue Generation
TLDR
This work proposes representing the knowledge that bridges a context and a response, and the way that knowledge is expressed, as latent variables, and devises a variational approach that can effectively estimate a generation model from a dialogue corpus and a knowledge corpus that are independent of each other.
Low-Resource Knowledge-Grounded Dialogue Generation
TLDR
A disentangled response decoder is devised in order to isolate parameters that depend on knowledge-grounded dialogues from the entire generation model, and evaluation results on two benchmarks indicate that with only 1/8 of the training data, the model can achieve state-of-the-art performance and generalize well on out-of-domain knowledge.
Sequential Latent Knowledge Selection for Knowledge-Grounded Dialogue
TLDR
The proposed sequential latent variable model can keep track of the prior and posterior distributions over knowledge, and can not only reduce the ambiguity caused by the diversity of knowledge selection within a conversation but also better leverage the response information for a proper choice of knowledge.
Knowledge Enhanced Fine-Tuning for Better Handling Unseen Entities in Dialogue Generation
TLDR
This work introduces two auxiliary training objectives: Interpret Masked Word, which conjectures the meaning of the masked entity given the context; and Hypernym Generation, which predicts the hypernym of the entity based on the context.
Multi-Stage Prompting for Knowledgeable Dialogue Generation
TLDR
This paper proposes a multi-stage prompting approach to generate knowledgeable responses from a single pretrained language model (LM) and shows that its knowledge generator outperforms the state-of-the-art retrieval-based model by 5.8% when combining knowledge relevance and correctness.
Wizard of Wikipedia: Knowledge-Powered Conversational Agents
TLDR
The best performing dialogue models are able to conduct knowledgeable discussions on open-domain topics as evaluated by automatic metrics and human evaluations, while a new benchmark allows for measuring further improvements in this important research direction.
Learning Knowledge Bases with Parameters for Task-Oriented Dialogue Systems
TLDR
This paper proposes a method to embed a KB of any size directly into the model parameters; the resulting model does not require any DST or template responses, nor the KB as input, and it can dynamically update its KB via fine-tuning.
Learning to Select Knowledge for Response Generation in Dialog Systems
TLDR
An end-to-end neural model is proposed which employs a novel knowledge selection mechanism, where both prior and posterior distributions over knowledge are used to facilitate knowledge selection, and which can better incorporate appropriate knowledge in response generation.
Knowledge-Grounded Dialogue Generation with Pre-trained Language Models
TLDR
Empirical results indicate that the proposed approach, which defines response generation with a pre-trained language model equipped with a knowledge selection module and jointly optimizes knowledge selection and response generation on unlabeled dialogues in an unsupervised manner, can significantly outperform state-of-the-art methods in both automatic evaluation and human judgment.
Bridging the Gap between Prior and Posterior Knowledge Selection for Knowledge-Grounded Dialogue Generation
TLDR
The prior selection module is enhanced with the necessary posterior information obtained from a specially designed Posterior Information Prediction Module (PIPM), and a Knowledge Distillation Based Training Strategy (KDBTS) is proposed to train the decoder with the knowledge selected from the prior distribution, removing the exposure bias of knowledge selection.