Evidentiality-guided Generation for Knowledge-Intensive NLP Tasks

@inproceedings{Asai2022EvidentialityguidedGF,
  title={Evidentiality-guided Generation for Knowledge-Intensive NLP Tasks},
  author={Akari Asai and Matt Gardner and Hannaneh Hajishirzi},
  booktitle={NAACL},
  year={2022}
}
Retrieval-augmented generation models have shown state-of-the-art performance across many knowledge-intensive NLP tasks such as open-domain question answering and fact verification. These models are trained to generate a final output given retrieved passages that can be irrelevant to an input query, leading to learning spurious cues or memorization. This work introduces a method to incorporate evidentiality of passages—whether a passage contains correct evidence to support the output—into… 
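The abstract points to a multi-task setup in which the generator is also trained to predict whether each retrieved passage is evidential. As a hedged sketch (the exact formulation is in the paper; the weight λ and silver labels e_k below are our shorthand, not the authors' notation), such a joint objective can be written as

  \mathcal{L} = \mathcal{L}_{\mathrm{gen}} + \lambda\,\mathcal{L}_{\mathrm{evid}},\qquad
  \mathcal{L}_{\mathrm{gen}} = -\log P(y \mid q, p_1, \dots, p_K),\qquad
  \mathcal{L}_{\mathrm{evid}} = -\sum_{k=1}^{K}\big[\,e_k \log P(e_k{=}1 \mid q, p_k) + (1 - e_k)\log P(e_k{=}0 \mid q, p_k)\,\big],

where e_k is a silver binary label indicating whether passage p_k actually supports the gold output y.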

Multi-Task Retrieval-Augmented Text Generation with Relevance Sampling

TLDR
A simple yet effective approach to cleaning the training set by exploiting a distinct property of knowledge-intensive generation: the connection of query-answer pairs to items in the knowledge base. The approach scales well with increased model capacity and achieves state-of-the-art results on seven KILT tasks.
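A minimal, hypothetical sketch of such a filter (function and field names are assumptions for illustration, not the paper's code): training examples are kept only if the gold answer can be matched against at least one retrieved knowledge-base passage.

import typing

def relevance_filter(examples: list, retriever: typing.Callable, top_k: int = 20) -> list:
    """Keep only training examples whose gold answer appears in at least one of
    the top-k retrieved passages -- a simple string-match proxy for the
    query-answer-to-knowledge-base connection described above."""
    kept = []
    for ex in examples:  # ex: {"query": str, "answers": [str, ...]}
        passages = retriever(ex["query"], top_k)  # -> list of passage strings
        if any(ans.lower() in p.lower() for ans in ex["answers"] for p in passages):
            kept.append(ex)
    return kept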

MIA 2022 Shared Task: Evaluating Cross-lingual Open-Retrieval Question Answering for 16 Diverse Languages

TLDR
The results of the Workshop on Multilingual Information Access 2022 Shared Task, evaluating cross-lingual open-retrieval question answering (QA) systems in 16 typologically diverse languages, are presented; the best system obtains particularly significant improvements in Tamil.

FiD-Light: Efficient and Effective Retrieval-Augmented Text Generation

Retrieval-augmented generation models offer many benefits over standalone language models: besides a textual answer to a given query, they provide provenance items retrieved from an updateable knowledge base…

References

SHOWING 1-10 OF 66 REFERENCES

Hurdles to Progress in Long-form Question Answering

TLDR
The task formulation raises fundamental challenges regarding evaluation and dataset creation that currently preclude meaningful modeling progress, and a new system is designed that relies on sparse attention and contrastive retriever learning to achieve state-of-the-art performance on the ELI5 LFQA dataset.

Hindsight: Posterior-guided training of retrievers for improved open-ended generation

TLDR
This work models the guide retriever on the posterior distribution Q of passages given the input and the target output, and trains it jointly with the standard retriever and the generator by maximizing the evidence lower bound (ELBo) in expectation over Q.
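For reference, one standard way to write the bound mentioned above, with Q the guide (posterior) retriever over passages p, P_ret the standard retriever, and P_gen the generator (the notation is ours, not necessarily the paper's):

  \log P(y \mid x) \;\ge\; \mathbb{E}_{p \sim Q(p \mid x, y)}\big[\log P_{\mathrm{gen}}(y \mid x, p)\big] \;-\; \mathrm{KL}\big(Q(p \mid x, y)\,\|\,P_{\mathrm{ret}}(p \mid x)\big)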

Attention-guided Generative Models for Extractive Question Answering

TLDR
A simple strategy to obtain an extractive answer span from the generative model by leveraging the decoder cross-attention patterns, which allows for hallucination-free inference while conferring significant improvements to the model’s ability to rerank relevant passages.
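As an illustration of the cross-attention idea (a hedged sketch; the aggregation heuristic and names below are assumptions, not the paper's method), each input token can be scored by the cross-attention mass it receives while the answer is decoded, and a contiguous span read off around the peak:

import numpy as np

def extract_span(cross_attn: np.ndarray, rel_threshold: float = 0.5) -> tuple:
    """cross_attn: array of shape (layers, heads, answer_len, input_len) holding
    decoder cross-attention weights for the generated answer tokens. Scores each
    input token by its average attention mass, then returns the contiguous span
    around the peak whose scores stay above a fraction of the peak score."""
    scores = cross_attn.mean(axis=(0, 1, 2))   # (input_len,)
    peak = int(scores.argmax())
    cutoff = rel_threshold * scores[peak]
    start = end = peak
    while start > 0 and scores[start - 1] >= cutoff:
        start -= 1
    while end < len(scores) - 1 and scores[end + 1] >= cutoff:
        end += 1
    return start, end + 1                      # half-open span (start, end)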

Can NLI Models Verify QA Systems' Predictions?

TLDR
Careful manual analysis over the predictions of the NLI model shows that it can further identify cases where the QA model produces the right answer for the wrong reason, i.e., when the answer sentence does not address all aspects of the question.

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

TLDR
This systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks and achieves state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more.

Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering

TLDR
Interestingly, it is observed that the performance of this method significantly improves when increasing the number of retrieved passages, evidence that sequence-to-sequence models offer a flexible framework to efficiently aggregate and combine evidence from multiple passages.
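The fusion-in-decoder style of aggregation behind this result can be sketched as follows (a minimal sketch with hypothetical encoder/decoder modules, not the released implementation): each (question, passage) pair is encoded independently, the encoder states are concatenated, and the decoder attends over all of them jointly.

import torch

def fid_forward(encoder, decoder, question_passage_ids, target_ids):
    """question_passage_ids: (n_passages, src_len) token ids, one row per
    '(question + passage_k)' input; target_ids: (tgt_len,) answer token ids.
    encoder and decoder are hypothetical seq2seq modules."""
    # Encode each passage independently (no cross-passage attention here).
    states = [encoder(ids.unsqueeze(0)) for ids in question_passage_ids]  # each (1, src_len, d)
    fused = torch.cat(states, dim=1)  # (1, n_passages * src_len, d)
    # The decoder cross-attends over the concatenation, fusing evidence
    # from all passages while generating the answer.
    return decoder(target_ids.unsqueeze(0), encoder_states=fused)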

Multi-Task Retrieval for Knowledge-Intensive Tasks

TLDR
This work proposes a multi-task trained model for neural retrieval that not only outperforms previous methods in the few-shot setting, but also rivals specialised neural retrievers, even when in-domain training data is abundant.

Distilling Knowledge from Reader to Retriever for Question Answering

TLDR
This paper proposes a technique, inspired by knowledge distillation, to learn retriever models for downstream tasks that does not require annotated pairs of queries and documents.
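A hedged sketch of the objective this implies (our notation): the retriever's distribution over retrieved passages is trained to match a target distribution derived from the reader's aggregated cross-attention scores, e.g. via a KL divergence of the form

  \mathcal{L}_{\mathrm{distill}} = \mathrm{KL}\big(\,\mathrm{softmax}_k(a_k)\;\|\;\mathrm{softmax}_k(S_\theta(q, p_k))\,\big),

where a_k is the reader's aggregated attention score for passage p_k and S_\theta is the retriever's similarity score, so no gold (query, passage) annotations are needed.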

A question-entailment approach to question answering

TLDR
A novel QA approach based on Recognizing Question Entailment (RQE), which exceeds the best results on the medical task with a 29.8% increase over the best official score, and highlights the effectiveness of combining IR and RQE for future QA efforts.

A Discrete Hard EM Approach for Weakly Supervised Question Answering

TLDR
This paper develops a hard EM learning scheme that computes gradients relative to the most likely solution at each update and significantly outperforms previous methods on six QA tasks, including absolute gains of 2–10%, and achieves the state-of-the-art on five of them.
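For concreteness (our notation, hedged): given a set Z of candidate solutions that all yield the correct answer, the hard EM scheme replaces the marginal (maximum marginal likelihood) objective -\log \sum_{z \in Z} P(z \mid x) with a loss on only the currently most likely candidate,

  \mathcal{L}_{\mathrm{hard}} = -\log P(\tilde{z} \mid x),\qquad \tilde{z} = \arg\max_{z \in Z} P(z \mid x),

so gradients are computed with respect to that single solution at each update.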
...