Publications
Dense Passage Retrieval for Open-Domain Question Answering
TLDR
This work shows that retrieval can be practically implemented using dense representations alone, where embeddings are learned from a small number of questions and passages by a simple dual-encoder framework.
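A minimal sketch of the dual-encoder idea behind this paper: two encoders embed questions and passages separately, relevance is a dot product, and training can use in-batch negatives. The bag-of-embeddings encoders below are placeholders (the paper uses two BERT encoders); all shapes and names are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

class DualEncoder(torch.nn.Module):
    def __init__(self, vocab_size=30522, dim=128):
        super().__init__()
        # Placeholder encoders: bag-of-embeddings instead of the paper's two BERTs.
        self.q_enc = torch.nn.EmbeddingBag(vocab_size, dim)
        self.p_enc = torch.nn.EmbeddingBag(vocab_size, dim)

    def forward(self, q_tokens, p_tokens):
        q = self.q_enc(q_tokens)   # (batch, dim) question embeddings
        p = self.p_enc(p_tokens)   # (batch, dim) passage embeddings
        return q @ p.T             # dot-product similarity matrix

model = DualEncoder()
q_tokens = torch.randint(0, 30522, (4, 16))   # 4 questions, 16 token ids each
p_tokens = torch.randint(0, 30522, (4, 100))  # their 4 gold passages
scores = model(q_tokens, p_tokens)            # (4, 4) question-passage scores
# In-batch negatives: passage i is the positive for question i, the rest are negatives.
loss = F.cross_entropy(scores, torch.arange(4))
loss.backward()
```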
UnifiedQA: Crossing Format Boundaries With a Single QA System
TLDR
This work uses the latest advances in language modeling to build a single pre-trained QA model, UNIFIEDQA, that performs well across 19 QA datasets spanning 4 diverse formats, and results in a new state of the art on 10 factoid and commonsense question answering datasets.
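An illustrative sketch of casting different QA formats into a single text-to-text input, in the spirit of UNIFIEDQA; the exact serialization (separators, choice labels, casing) is an assumption here, not the paper's verbatim format.

```python
def to_text2text(question, context=None, choices=None):
    # Serialize any QA instance as one plain string for a seq2seq model.
    parts = [question.strip()]
    if choices:
        parts.append(" ".join(f"({chr(65 + i)}) {c}" for i, c in enumerate(choices)))
    if context:
        parts.append(context.strip())
    return " \\n ".join(parts).lower()

# Extractive, multiple-choice, and yes/no questions all become plain strings
# that one text-to-text QA model can consume.
print(to_text2text("Who wrote Hamlet?", context="Hamlet is a play by William Shakespeare."))
print(to_text2text("Which of these is a mammal?", choices=["salmon", "whale", "trout"]))
```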
Compositional Questions Do Not Necessitate Multi-hop Reasoning
TLDR
This work introduces a single-hop BERT-based RC model that achieves 67 F1, comparable to state-of-the-art multi-hop models, and designs an evaluation setting where humans are not shown all of the paragraphs necessary for the intended multi-hop reasoning but can still answer over 80% of the questions.
Query-Reduction Networks for Question Answering
TLDR
Query-Reduction Network (QRN), a variant of Recurrent Neural Network (RNN) that effectively handles both short-term and long-term sequential dependencies to reason over multiple facts, is proposed.
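A minimal sketch of a gated query-reduction step, assuming a simplified gate and candidate parameterization rather than the paper's exact equations: each fact updates the query state, so the query is gradually reduced as facts are read in sequence.

```python
import torch

class QueryReductionCell(torch.nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.gate = torch.nn.Linear(2 * dim, dim)   # update gate
        self.cand = torch.nn.Linear(2 * dim, dim)   # reduced-query candidate

    def forward(self, x_t, q_prev):
        h = torch.cat([x_t, q_prev], dim=-1)
        z = torch.sigmoid(self.gate(h))
        q_tilde = torch.tanh(self.cand(h))
        return z * q_tilde + (1.0 - z) * q_prev     # interpolate old and reduced query

cell = QueryReductionCell(dim=64)
q = torch.randn(1, 64)                              # encoded question (placeholder)
facts = torch.randn(5, 1, 64)                       # 5 encoded supporting facts
for x_t in facts:                                   # reduce the query fact by fact
    q = cell(x_t, q)
```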
Multi-hop Reading Comprehension through Question Decomposition and Rescoring
TLDR
A system that decomposes a compositional question into simpler sub-questions that can be answered by off-the-shelf single-hop RC models is proposed and a new global rescoring approach is introduced that considers each decomposition to select the best final answer, greatly improving overall performance.
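A schematic sketch of the decompose-answer-rescore pipeline; `decompose`, `single_hop_answer`, and `rescore` are hypothetical stand-ins for the paper's components, and the `[ANSWER]` placeholder convention is an assumption for illustration.

```python
def answer_compositional(question, context, decompose, single_hop_answer, rescore):
    candidates = []
    for sub_questions in decompose(question):     # candidate decompositions
        answer = None
        for sub_q in sub_questions:
            if answer is not None:                # feed the previous answer forward
                sub_q = sub_q.replace("[ANSWER]", answer)
            answer = single_hop_answer(sub_q, context)
        candidates.append((sub_questions, answer))
    # Global rescoring: score each (decomposition, answer) pair and keep the best.
    return max(candidates, key=lambda c: rescore(question, *c))[1]
```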
A Discrete Hard EM Approach for Weakly Supervised Question Answering
TLDR
This paper develops a hard EM learning scheme that computes gradients relative to the most likely solution at each update and significantly outperforms previous methods on six QA tasks, including absolute gains of 2–10%, and achieves the state-of-the-art on five of them.
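A minimal sketch of the hard-EM objective: among the candidate solutions consistent with the weak label, backpropagate only through the one the current model scores highest. The candidate log-probabilities below are placeholders for the QA model's outputs.

```python
import torch

def hard_em_loss(candidate_log_probs):
    # candidate_log_probs: (num_candidates,) log p(z_i | x) for each candidate
    # solution consistent with the weak label (e.g., every matching answer span).
    return -candidate_log_probs.max()   # gradient flows only through the best candidate

logits = torch.randn(6, requires_grad=True)            # placeholder model scores
loss = hard_em_loss(torch.log_softmax(logits, dim=0))
loss.backward()
```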
Efficient and Robust Question Answering from Minimal Context over Documents
TLDR
A simple sentence selector is proposed to select the minimal set of sentences to feed into the QA model, and the overall system achieves significant reductions in training and inference time, with accuracy comparable to or better than the state of the art on SQuAD, NewsQA, TriviaQA, and SQuAD-Open.
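A sketch of the sentence-selection idea: score each context sentence against the question and keep only a small top-scoring set as the QA model's input. The lexical-overlap scorer here is a trivial stand-in, not the paper's trained selector.

```python
def select_sentences(question, sentences, k=2):
    # Score sentences by word overlap with the question and keep the top k.
    q_tokens = set(question.lower().split())
    scored = [(len(q_tokens & set(s.lower().split())), s) for s in sentences]
    scored.sort(key=lambda x: x[0], reverse=True)
    return [s for _, s in scored[:k]]   # minimal context passed to the QA model

context = [
    "The Eiffel Tower is in Paris.",
    "It was completed in 1889.",
    "Paris is the capital of France.",
]
print(select_sentences("When was the Eiffel Tower completed?", context))
```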
AmbigQA: Answering Ambiguous Open-domain Questions
TLDR
This paper introduces AmbigQA, a new open-domain question answering task which involves predicting a set of question-answer pairs, where every plausible answer is paired with a disambiguated rewrite of the original question.
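An illustrative data structure for the AmbigQA output format, with hypothetical field names: each prediction is a set of question-answer pairs in which every plausible answer is attached to a disambiguated rewrite of the prompt question.

```python
from dataclasses import dataclass

@dataclass
class DisambiguatedQA:
    rewritten_question: str   # disambiguated rewrite of the original prompt
    answer: str               # the plausible answer it pins down

prompt = "When did the Simpsons first air?"
prediction = [
    DisambiguatedQA("When did the Simpsons first air as a short on The Tracey Ullman Show?",
                    "April 19, 1987"),
    DisambiguatedQA("When did the Simpsons first air as a half-hour prime-time show?",
                    "December 17, 1989"),
]
```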
Question Answering through Transfer Learning from Large Fine-grained Supervision Data
TLDR
It is shown that the task of question answering (QA) can significantly benefit from the transfer learning of models trained on a different large, fine-grained QA dataset and that finer supervision provides better guidance for learning lexical and syntactic information than coarser supervision.
Neural Speed Reading via Skim-RNN
TLDR
Skim-RNN, a recurrent neural network that dynamically decides to update only a small fraction of the hidden state for relatively unimportant input tokens, gives computational advantage over an RNN that always updates the entire hidden state.
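A minimal sketch of the Skim-RNN update, assuming a simplified hard decision rule (the paper trains the decision differentiably): a small RNN rewrites only the first few hidden dimensions for tokens judged unimportant, while the full RNN handles the rest.

```python
import torch

class SkimRNN(torch.nn.Module):
    def __init__(self, input_dim, hidden_dim, small_dim):
        super().__init__()
        self.big = torch.nn.GRUCell(input_dim, hidden_dim)    # full update
        self.small = torch.nn.GRUCell(input_dim, small_dim)   # cheap skim update
        self.decide = torch.nn.Linear(input_dim + hidden_dim, 2)
        self.small_dim = small_dim

    def forward(self, x_t, h):
        # Hard decision per token: 1 = skim, 0 = full update (simplified rule).
        skim = self.decide(torch.cat([x_t, h], dim=-1)).argmax(dim=-1).bool()
        h_big = self.big(x_t, h)
        h_small = h.clone()
        h_small[:, :self.small_dim] = self.small(x_t, h[:, :self.small_dim])
        return torch.where(skim.unsqueeze(-1), h_small, h_big)

cell = SkimRNN(input_dim=32, hidden_dim=64, small_dim=8)
h = torch.zeros(2, 64)
for x_t in torch.randn(5, 2, 32):   # 5 time steps, batch of 2
    h = cell(x_t, h)
```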