Corpus ID: 230437663

Baleen: Robust Multi-Hop Reasoning at Scale via Condensed Retrieval

@article{Khattab2021BaleenRM,
  title={Baleen: Robust Multi-Hop Reasoning at Scale via Condensed Retrieval},
  author={O. Khattab and Christopher Potts and Matei A. Zaharia},
  journal={ArXiv},
  year={2021},
  volume={abs/2101.00436}
}
Multi-hop reasoning (i.e., reasoning across two or more documents) at scale is a key step toward NLP models that can exhibit broad world knowledge by leveraging large collections of documents. We propose Baleen, a system that improves the robustness and scalability of multi-hop reasoning over current approaches. Baleen introduces a per-hop condensed retrieval pipeline to mitigate the size of the search space, a focused late interaction retriever (FliBERT) that can model complex multi-hop…
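The "focused late interaction" retriever (FliBERT) named in the abstract builds on late-interaction scoring in the style of ColBERT, the authors' earlier retriever. A minimal sketch of that scoring rule, assuming pre-computed, L2-normalized token embeddings; the function name and shapes here are illustrative, not Baleen's actual implementation:

```python
import numpy as np

def late_interaction_score(query_embs: np.ndarray, doc_embs: np.ndarray) -> float:
    """ColBERT-style MaxSim: each query token is matched against its most
    similar document token, and the per-token maxima are summed.

    query_embs: (num_query_tokens, dim) -- assumed L2-normalized
    doc_embs:   (num_doc_tokens, dim)   -- assumed L2-normalized
    """
    sims = query_embs @ doc_embs.T        # (num_query_tokens, num_doc_tokens)
    return float(sims.max(axis=1).sum())  # best match per query token, then sum

# Toy example with 2-d embeddings: two query tokens, two passage tokens.
q = np.array([[1.0, 0.0], [0.0, 1.0]])
d = np.array([[1.0, 0.0], [0.6, 0.8]])
print(late_interaction_score(q, d))  # 1.0 + 0.8 = 1.8
```

Because every query token scores independently, this rule lets a retriever focus different query tokens on different parts of a passage, which is the property the "focused" variant refines for multi-hop queries.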

Figures and Tables from this paper

Citations

Adaptive Information Seeking for Open-Domain Question Answering
  • Yunchang Zhu, Liang Pang, Yanyan Lan, Huawei Shen, Xueqi Cheng
  • Computer Science
  • ArXiv
  • 2021
Information seeking is an essential step for open-domain question answering to efficiently gather evidence from a large corpus. Recently, iterative approaches have been proven to be effective for…
Joint Passage Ranking for Diverse Multi-Answer Retrieval
To model the joint probability of the retrieved passages, JPR makes use of an autoregressive reranker that selects a sequence of passages, equipped with novel training and decoding algorithms.

References

SHOWING 1-10 OF 28 REFERENCES
Answering Complex Open-Domain Questions with Multi-Hop Dense Retrieval
This work proposes a simple and efficient multi-hop dense retrieval approach for answering complex open-domain questions, which achieves state-of-the-art performance on two multi-hop datasets, HotpotQA and multi-evidence FEVER, and can be applied to any unstructured text corpus.
Learning to Retrieve Reasoning Paths over Wikipedia Graph for Question Answering
A new graph-based recurrent retrieval approach that learns to retrieve reasoning paths over the Wikipedia graph to answer multi-hop open-domain questions and achieves significant improvement in HotpotQA, outperforming the previous best model by more than 14 points.
Retrieve, Rerank, Read, then Iterate: Answering Open-Domain Questions of Arbitrary Complexity from Text
This work proposes a unified system to answer open-domain questions of arbitrary complexity directly from text that works with off-the-shelf retrieval systems on arbitrary text collections and achieves strong performance on a new unified benchmark.
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
A novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods, in which a model learns to seek and combine evidence — effectively performing multi-hop, alias multi-step, inference.
Understanding Dataset Design Choices for Multi-hop Reasoning
This paper investigates two recently proposed datasets, WikiHop and HotpotQA, and explores sentence-factored models for these tasks; by design, these models cannot do multi-hop reasoning, but they are still able to solve a large number of examples in both datasets.
Dense Passage Retrieval for Open-Domain Question Answering
This work shows that retrieval can be practically implemented using dense representations alone, where embeddings are learned from a small number of questions and passages by a simple dual-encoder framework.
Constructing A Multi-hop QA Dataset for Comprehensive Evaluation of Reasoning Steps
This study presents a new multi-hop QA dataset, called 2WikiMultiHopQA, which uses structured and unstructured data and introduces evidence information containing a reasoning path for multi-hop questions, and demonstrates that the dataset is challenging for multi-hop models and ensures that multi-hop reasoning is required.
Do Multi-hop Readers Dream of Reasoning Chains?
A systematic analysis to assess whether providing the full reasoning chain of multiple passages, instead of just the one final passage where the answer appears, could improve the performance of existing QA models, and a discussion of the need to develop models with better reasoning abilities.
Transformer-XH: Multi-Evidence Reasoning with eXtra Hop Attention
Transformer-XH is presented, which uses eXtra Hop attention to enable intrinsic modeling of structured texts in a fully data-driven way and leads to a simpler multi-hop QA system which outperforms previous state-of-the-art on the HotpotQA FullWiki setting.
REALM: Retrieval-Augmented Language Model Pre-Training
The effectiveness of Retrieval-Augmented Language Model pre-training (REALM) is demonstrated by fine-tuning on the challenging task of Open-domain Question Answering (Open-QA) and is found to outperform all previous methods by a significant margin, while also providing qualitative benefits such as interpretability and modularity.