Simple and Effective Multi-Paragraph Reading Comprehension

@article{Clark2018SimpleAE,
  title={Simple and Effective Multi-Paragraph Reading Comprehension},
  author={Christopher Clark and Matt Gardner},
  journal={ArXiv},
  year={2018},
  volume={abs/1710.10723}
}
We consider the problem of adapting neural paragraph-level question answering models to the case where entire documents are given as input. [...] We sample multiple paragraphs from the documents during training, and use a shared-normalization training objective that encourages the model to produce globally correct output. We combine this method with a state-of-the-art pipeline for training models on document QA data. Experiments demonstrate strong performance on several document QA datasets. Overall…
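As a rough sketch of the shared-normalization objective described in the abstract (illustrative PyTorch, not the authors' released code; tensor names and shapes are assumptions): the answer-start scores of every paragraph sampled from a document are normalized with a single softmax, so probability mass is shared globally across paragraphs rather than within each one.

import torch
import torch.nn.functional as F

def shared_norm_loss(start_scores, answer_mask):
    # start_scores: (num_paragraphs, seq_len) unnormalized answer-start
    #   scores for all paragraphs sampled from one document.
    # answer_mask:  (num_paragraphs, seq_len) True at tokens that begin
    #   a correct answer span.
    # Normalize over all tokens of all paragraphs jointly, not per paragraph.
    log_probs = F.log_softmax(start_scores.reshape(-1), dim=0)
    # Maximize the marginal likelihood of any correct start position.
    return -torch.logsumexp(log_probs[answer_mask.reshape(-1)], dim=0)

A per-paragraph softmax would let the model assign high confidence to an answer in every paragraph independently; the joint softmax forces the scores to be comparable across paragraphs, which is the "globally correct output" the abstract refers to.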
A Deep Cascade Model for Multi-Document Reading Comprehension
TLDR
A novel deep cascade learning model is developed, which progressively evolves from the document-level and paragraph-level ranking of candidate texts to more precise answer extraction with machine reading comprehension.
Probabilistic Assumptions Matter: Improved Models for Distantly-Supervised Document-Level Question Answering
TLDR
This work addresses the problem of extractive question answering using document-level distant supervision, pairing questions and relevant documents with answer strings, and demonstrates that a multi-objective model can efficiently combine the advantages of multiple assumptions and outperform the best individual formulation.
Multi-hop Reading Comprehension through Question Decomposition and Rescoring
TLDR
A system that decomposes a compositional question into simpler sub-questions that can be answered by off-the-shelf single-hop RC models is proposed, and a new global rescoring approach is introduced that considers each decomposition to select the best final answer, greatly improving overall performance.
Cut to the Chase: A Context Zoom-in Network for Reading Comprehension
TLDR
A novel neural-based architecture is presented that is capable of extracting relevant regions based on a given question-document pair and generating a well-formed answer on reading comprehension tasks.
Recurrent Chunking Mechanisms for Long-Text Machine Reading Comprehension
TLDR
Experiments on three MRC tasks demonstrate the effectiveness of the proposed recurrent chunking mechanisms: they can obtain segments that are more likely to contain complete answers and at the same time provide sufficient contexts around the ground truth answers for better predictions.
Multi-style Generative Reading Comprehension
This study tackles generative reading comprehension (RC), which consists of answering questions based on textual evidence and natural language generation (NLG). We propose a multi-style abstractive…
Simple and Effective Semi-Supervised Question Answering
TLDR
This work envisions a system where the end user specifies a set of base documents and only a few labeled examples, and exploits the document structure to create cloze-style questions from these base documents; pre-trains a powerful neural network on the cloze-style questions; and further fine-tunes the model on the labeled examples.
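As a rough illustration of the cloze-question construction step mentioned above (a minimal sketch under simple assumptions; the helper name and the choice of answer span are hypothetical, and the real system's heuristics are more involved):

def make_cloze(sentence, answer_span, blank="@placeholder"):
    # Build a cloze-style (question, answer) pair by blanking out one
    # occurrence of the chosen answer span.
    question = sentence.replace(answer_span, blank, 1)
    return question, answer_span

# Example: derive a training pair from an unlabeled base document.
q, a = make_cloze("Marie Curie won the Nobel Prize in 1903.", "Marie Curie")
# q == "@placeholder won the Nobel Prize in 1903.", a == "Marie Curie"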
Microsoft Word-TKDE.docx
Machine reading comprehension is a challenging task and a hot topic in natural language processing. Its goal is to develop systems that answer questions about a given context. In this paper, we…
A Survey on Machine Reading Comprehension Systems
TLDR
It is demonstrated that the focus of research has changed in recent years from answer extraction to answer generation, from single to multi-document reading comprehension, and from learning from scratch to using pre-trained embeddings.
ReadTwice: Reading Very Large Documents with Memories
TLDR
It is shown that the ReadTwice method outperforms models of comparable size on several question answering (QA) datasets and sets a new state of the art on the challenging NarrativeQA task, with questions about entire books.

References

Showing 1-10 of 36 references
Text Understanding with the Attention Sum Reader Network
TLDR
A new, simple model is presented that uses attention to directly pick the answer from the context, as opposed to computing the answer from a blended representation of words in the document as is usual in similar models; this makes the model particularly suitable for question-answering problems where the answer is a single word from the document.
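A minimal sketch of the pointer-sum idea this summary describes (illustrative code; names and shapes are assumptions): attention weights over the document are summed per candidate across all of its occurrences, and the candidate with the largest total is taken as the answer.

import torch

def attention_sum(attention, token_ids, candidate_ids):
    # attention:     (seq_len,) softmax attention over document tokens.
    # token_ids:     (seq_len,) vocabulary id of each document token.
    # candidate_ids: list of candidate answer ids.
    scores = [attention[token_ids == cand].sum() for cand in candidate_ids]
    # Pick the candidate whose occurrences received the most total attention.
    return candidate_ids[torch.stack(scores).argmax().item()]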
Reading Wikipedia to Answer Open-Domain Questions
TLDR
This approach combines a search component based on bigram hashing and TF-IDF matching with a multi-layer recurrent neural network model trained to detect answers in Wikipedia paragraphs; experiments indicate that both modules are highly competitive with existing counterparts.
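A rough sketch of that retrieval component's flavor (feature-hashed bigram TF-IDF; the bucket count, function names, and use of Python's built-in hash are illustrative, not the system's actual implementation):

import math
from collections import Counter

NUM_BUCKETS = 2 ** 24  # hash n-grams into a fixed number of buckets

def ngram_buckets(tokens):
    # Map unigrams and bigrams to buckets via feature hashing.
    grams = tokens + [" ".join(p) for p in zip(tokens, tokens[1:])]
    return [hash(g) % NUM_BUCKETS for g in grams]

def tfidf_vector(tokens, doc_freq, num_docs):
    # Sparse TF-IDF vector over hashed n-gram buckets.
    counts = Counter(ngram_buckets(tokens))
    return {b: tf * math.log(num_docs / (1 + doc_freq.get(b, 0)))
            for b, tf in counts.items()}

def relevance(query_vec, doc_vec):
    # Dot product between two sparse vectors: the ranking score.
    return sum(w * doc_vec.get(b, 0.0) for b, w in query_vec.items())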
Bidirectional Attention Flow for Machine Comprehension
TLDR
The BIDAF network is introduced, a multi-stage hierarchical process that represents the context at different levels of granularity and uses a bi-directional attention flow mechanism to obtain a query-aware context representation without early summarization.
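A compact sketch of the bi-directional attention step (following the published BiDAF formulation; variable names and the trainable weight vector are assumptions):

import torch
import torch.nn.functional as F

def bidaf_attention(c, q, w):
    # c: (T, d) context encodings; q: (J, d) query encodings;
    # w: (3 * d,) learned similarity weights.
    T, J, d = c.size(0), q.size(0), c.size(1)
    c_exp = c.unsqueeze(1).expand(T, J, d)
    q_exp = q.unsqueeze(0).expand(T, J, d)
    # Similarity S[t, j] = w . [c_t; q_j; c_t * q_j]
    S = torch.cat([c_exp, q_exp, c_exp * q_exp], dim=-1) @ w   # (T, J)
    c2q = F.softmax(S, dim=1) @ q                              # context-to-query
    b = F.softmax(S.max(dim=1).values, dim=0)                  # (T,)
    q2c = (b @ c).unsqueeze(0).expand(T, d)                    # query-to-context
    # Query-aware context representation, (T, 4 * d), with no early summarization.
    return torch.cat([c, c2q, c * c2q, c * q2c], dim=-1)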
SQuAD: 100,000+ Questions for Machine Comprehension of Text
TLDR
A strong logistic regression model is built, which achieves an F1 score of 51.0%, a significant improvement over a simple baseline (20%).
Question Answering through Transfer Learning from Large Fine-grained Supervision Data
TLDR
It is shown that the task of question answering (QA) can significantly benefit from the transfer learning of models trained on a different large, fine-grained QA dataset, and that finer supervision provides better guidance for learning lexical and syntactic information than coarser supervision.
Multi-Mention Learning for Reading Comprehension with Neural Cascades
TLDR
This work takes a different approach by constructing lightweight models that are combined in a cascade to find the answer; each submodel consists only of feed-forward networks equipped with an attention mechanism, making it trivially parallelizable.
S-Net: From Answer Extraction to Answer Generation for Machine Reading Comprehension
TLDR
The answer extraction model is first employed to predict the most important sub-spans from the passage as evidence, and the answer synthesis model takes the evidence as additional features along with the question and passage to further elaborate the final answers.
R3: Reinforced Reader-Ranker for Open-Domain Question Answering
TLDR
A new pipeline for open-domain QA with a Ranker component, which learns to rank retrieved passages in terms of the likelihood of generating the ground-truth answer to a given question, and a novel method that jointly trains the Ranker along with an answer-generation Reader model using reinforcement learning.
TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension
TLDR
It is shown that, in comparison to other recently introduced large-scale datasets, TriviaQA has relatively complex, compositional questions, has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and requires more cross-sentence reasoning to find answers.
Mnemonic Reader: Machine Comprehension with Iterative Aligning and Multi-hop Answer Pointing
TLDR
Mnemonic Reader for MC tasks is introduced, an end-to-end neural network which aims to tackle the above problem in two aspects: an iterative aligning mechanism which not only captures interactions between the context and the query but also models interactions among the context itself to obtain a fully-aware context representation.