Generate rather than Retrieve: Large Language Models are Strong Context Generators
@article{Yu2022GenerateRT,
  title   = {Generate rather than Retrieve: Large Language Models are Strong Context Generators},
  author  = {W. Yu and Dan Iter and Shuohang Wang and Yichong Xu and Mingxuan Ju and Soumya Sanyal and Chenguang Zhu and Michael Zeng and Meng Jiang},
  journal = {ArXiv},
  year    = {2022},
  volume  = {abs/2209.10063}
}
… used under a zero-shot setting, or a small one (e.g., FiD; Izacard & Grave, 2021) fine-tuned with generated documents on the training split of the target dataset. We evaluate our proposed method on three different knowledge-intensive tasks and demonstrate its effectiveness in both zero-shot and supervised settings.
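As a rough sketch of the generate-then-read idea (illustrative only; `call_llm` below is a hypothetical stand-in for whatever large language model serves as generator and reader):

```python
# Minimal generate-then-read sketch (illustrative only).
# `call_llm` is a hypothetical placeholder for any large language model call;
# the paper prompts an LLM to generate documents and then reads them with
# either a zero-shot LLM or a fine-tuned FiD reader.

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM completion call."""
    raise NotImplementedError("plug in your own model or API here")

def generate_then_read(question: str, num_documents: int = 3) -> str:
    # Step 1: generate contextual documents instead of retrieving them.
    documents = [
        call_llm(f"Generate a background document to answer the question: {question}")
        for _ in range(num_documents)
    ]
    # Step 2: read the generated documents to produce the final answer.
    context = "\n\n".join(documents)
    return call_llm(f"{context}\n\nQuestion: {question}\nAnswer:")
```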
One Citation
Pre-computed memory or on-the-fly encoding? A hybrid approach to retrieval augmentation makes the most of your compute
- Computer Science, ArXiv
- 2023
It is shown that LUMEN outperforms pure memory approaches on multiple question-answering tasks while being much cheaper than FiD, outperforms both for any given compute budget, and widens its advantage over FiD as model size increases.
References
Showing 1–10 of 62 references
Dense Passage Retrieval for Open-Domain Question Answering
- Computer Science, EMNLP
- 2020
This work shows that retrieval can be practically implemented using dense representations alone, where embeddings are learned from a small number of questions and passages by a simple dual-encoder framework.
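A toy illustration of the dual-encoder idea, assuming hypothetical `encode_question` and `encode_passage` helpers in place of the trained encoders:

```python
import numpy as np

# Toy dual-encoder retrieval sketch (not DPR's released implementation).
# encode_question / encode_passage stand in for two learned text encoders.

def encode_question(text: str) -> np.ndarray:
    raise NotImplementedError("replace with a trained question encoder")

def encode_passage(text: str) -> np.ndarray:
    raise NotImplementedError("replace with a trained passage encoder")

def retrieve(question: str, passages: list[str], k: int = 5) -> list[str]:
    q = encode_question(question)                        # shape (d,)
    p = np.stack([encode_passage(x) for x in passages])  # shape (n, d)
    scores = p @ q                                       # inner-product similarity
    top = np.argsort(-scores)[:k]
    return [passages[i] for i in top]
```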
OPT: Open Pre-trained Transformer Language Models
- Computer Science, ArXiv
- 2022
This work presents Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which they aim to fully and responsibly share with interested researchers.
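As a concrete usage note, the smaller OPT checkpoints are publicly hosted on the Hugging Face Hub (e.g. `facebook/opt-125m`); a minimal generation example with the `transformers` library, assuming it is installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the smallest OPT checkpoint; the same code works for larger sizes
# (opt-350m, opt-1.3b, ...) given enough memory.
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```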
Autoregressive Search Engines: Generating Substrings as Document Identifiers
- Computer Science, ArXiv
- 2022
This work proposes an alternative that does not force any structure on the search space: all n-grams in a passage can serve as its identifiers. The approach not only outperforms prior autoregressive methods but also yields an average improvement over more established retrieval solutions for passage-level retrieval on the KILT benchmark.
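A toy sketch of the identifier scheme, with a plain dictionary lookup standing in for the constrained autoregressive decoding over an FM-index that the actual system uses:

```python
from collections import defaultdict

# Toy illustration only: every n-gram of a passage can serve as its identifier.
# The real system generates identifiers autoregressively under an FM-index
# constraint; here a "generated" n-gram is simply looked up in a dictionary.

def ngrams(tokens: list[str], n: int) -> list[tuple[str, ...]]:
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def build_index(passages: list[str], n: int = 3) -> dict:
    index = defaultdict(set)
    for pid, passage in enumerate(passages):
        for gram in ngrams(passage.lower().split(), n):
            index[gram].add(pid)
    return index

def lookup(generated_identifier: str, index: dict, n: int = 3) -> set[int]:
    # Return the ids of passages whose text contains the generated n-gram.
    key = tuple(generated_identifier.lower().split()[:n])
    return set(index.get(key, set()))
```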
Training language models to follow instructions with human feedback
- Computer Science, ArXiv
- 2022
The results show that fine-tuning with human feedback is a promising direction for aligning language models with human intent, yielding improvements in truthfulness and reductions in toxic output generation with minimal performance regressions on public NLP datasets.
Fool Me Twice: Entailment from Wikipedia Gamification
- Computer Science, NAACL
- 2021
FoolMeTwice (FM2 for short) is a large dataset of challenging entailment pairs collected through a fun multi-player game. The game encourages diverse strategies for crafting claims, such as temporal inference and diverting to unrelated evidence, resulting in higher-quality data for the entailment and evidence retrieval tasks.
Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering
- Computer Science, EACL
- 2021
Interestingly, it is observed that the performance of this method improves significantly as the number of retrieved passages increases, evidence that sequence-to-sequence models offer a flexible framework for efficiently aggregating and combining evidence from multiple passages.
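A schematic of the fusion-in-decoder pattern behind this observation, with hypothetical `encode` and `decode` helpers standing in for the seq2seq model's encoder and decoder:

```python
import numpy as np

# Fusion-in-Decoder sketch (schematic, not the released implementation).

def encode(text: str) -> np.ndarray:
    """Return a (seq_len, hidden) matrix of encoder states."""
    raise NotImplementedError("replace with a seq2seq encoder")

def decode(encoder_states: np.ndarray) -> str:
    """Generate an answer while attending over all encoder states."""
    raise NotImplementedError("replace with a seq2seq decoder")

def fusion_in_decoder(question: str, passages: list[str]) -> str:
    # Each passage is encoded independently together with the question...
    per_passage = [encode(f"question: {question} context: {p}") for p in passages]
    # ...and the decoder jointly attends over the concatenation, which is why
    # adding more retrieved passages keeps improving accuracy.
    fused = np.concatenate(per_passage, axis=0)
    return decode(fused)
```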
Language Models are Few-Shot Learners
- Computer Science, NeurIPS
- 2020
GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic.
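The few-shot setting amounts to packing a handful of demonstrations into the prompt and letting the model complete the pattern; a minimal prompt-construction sketch (the model call itself is omitted):

```python
# Few-shot prompting sketch: demonstrations are concatenated into the context
# window and the model completes the final, unanswered example.

def build_few_shot_prompt(demos: list[tuple[str, str]], query: str) -> str:
    lines = [f"Q: {q}\nA: {a}" for q, a in demos]
    lines.append(f"Q: {query}\nA:")
    return "\n\n".join(lines)

prompt = build_few_shot_prompt(
    demos=[("What is 12 + 7?", "19"), ("What is 30 + 45?", "75")],
    query="What is 123 + 456?",
)
print(prompt)  # feed this to any autoregressive language model
```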
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
- Computer Science, J. Mach. Learn. Res.
- 2020
This systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks and achieves state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more.
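The text-to-text framing casts every task as string-to-string generation selected by a task prefix; a small illustration using the public `t5-small` checkpoint via the `transformers` library (not the paper's training code):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task is expressed as text in, text out, selected by a prefix.
inputs = tokenizer("translate English to German: The house is wonderful.",
                   return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```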
Reading Wikipedia to Answer Open-Domain Questions
- Computer Science, ACL
- 2017
This approach combines a search component based on bigram hashing and TF-IDF matching with a multi-layer recurrent neural network model trained to detect answers in Wikipedia paragraphs; experiments indicate that both modules are highly competitive with respect to existing counterparts.
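The sparse-retrieval side can be roughly approximated with off-the-shelf tools; a sketch of hashed unigram/bigram TF-IDF matching using scikit-learn, which is not the original implementation:

```python
from sklearn.feature_extraction.text import HashingVectorizer, TfidfTransformer
from sklearn.metrics.pairwise import cosine_similarity

# Hashed unigram+bigram TF-IDF retrieval, roughly in the spirit of the
# search component described above (illustrative data and parameters).
docs = [
    "Paris is the capital and most populous city of France.",
    "The Nile is a major north-flowing river in northeastern Africa.",
]

hasher = HashingVectorizer(ngram_range=(1, 2), n_features=2**20,
                           alternate_sign=False)
tfidf = TfidfTransformer()
doc_vectors = tfidf.fit_transform(hasher.transform(docs))

query_vector = tfidf.transform(hasher.transform(["capital of France"]))
scores = cosine_similarity(query_vector, doc_vectors)[0]
print(docs[scores.argmax()])
```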
Few-shot Learning with Retrieval Augmented Language Models
- Computer Science, ArXiv
- 2022
Atlas is presented, a carefully designed and pre-trained retrieval augmented language model able to learn knowledge intensive tasks with very few training examples, and the impact of the content of the document index is studied, showing that it can easily be updated.
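On the claim that the index can easily be updated: with a flat dense index, an update amounts to encoding the new documents and appending their vectors; a minimal sketch using `faiss`, with random vectors standing in for embeddings from a trained encoder:

```python
import numpy as np
import faiss

# Dense index that can be extended in place; random vectors stand in for
# real document embeddings produced by a trained encoder.
dim = 64
index = faiss.IndexFlatIP(dim)

initial_docs = np.random.rand(1000, dim).astype("float32")
index.add(initial_docs)

# "Updating" the index: encode the new documents and append their vectors.
new_docs = np.random.rand(10, dim).astype("float32")
index.add(new_docs)

query = np.random.rand(1, dim).astype("float32")
scores, ids = index.search(query, 5)
print(ids)
```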