DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs
@inproceedings{Dua2019DROPAR,
  title     = {DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs},
  author    = {Dheeru Dua and Yizhong Wang and Pradeep Dasigi and Gabriel Stanovsky and Sameer Singh and Matt Gardner},
  booktitle = {NAACL-HLT},
  year      = {2019}
}
Reading comprehension has recently seen rapid progress, with systems matching humans on the most popular datasets for the task. However, a large body of work has highlighted the brittleness of these systems, showing that there is much work left to be done. We introduce a new English reading comprehension benchmark, DROP, which requires Discrete Reasoning Over the content of Paragraphs. In this crowdsourced, adversarially-created, 96k-question benchmark, a system must resolve references in a…
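As a toy illustration of the kind of discrete operation DROP targets, a question may require extracting numbers from a passage and combining them arithmetically. The passage, question, and helper below are invented for this sketch, not drawn from the dataset:

```python
import re

def numbers_in(text: str) -> list[float]:
    """Extract the numeric values mentioned in a passage."""
    return [float(n) for n in re.findall(r"\d+(?:\.\d+)?", text)]

passage = ("Smith kicked a 32-yard field goal in the first quarter, "
           "and a 45-yard field goal in the fourth.")

# Question: "How many yards longer was the second field goal than the first?"
first, second = numbers_in(passage)
answer = second - first
print(answer)  # 13.0
```

Answering such questions requires locating multiple spans in the text and performing an operation (subtraction, counting, sorting) over them, rather than copying a single span, which is what makes the benchmark harder than span-extraction datasets.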
Citations
Publications citing this paper (showing 10 of 13 citations):
A Discrete Hard EM Approach for Weakly Supervised Question Answering
A Multi-Type Multi-Span Network for Reading Comprehension that Requires Discrete Reasoning
NumNet: Machine Reading Comprehension with Numerical Reasoning
Quoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning
AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models
Are We Modeling the Task or the Annotator? An Investigation of Annotator Bias in Natural Language Understanding Datasets
Can You Unpack That? Learning to Rewrite Questions-in-Context
Movie Plot Analysis via Turning Point Identification
MultiQA: An Empirical Investigation of Generalization and Transfer in Reading Comprehension
References
Publications referenced by this paper (showing 10 of 58 references):
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Bidirectional Attention Flow for Machine Comprehension
Deep Contextualized Word Representations
QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension
ReCoRD: Bridging the Gap between Human and Machine Commonsense Reading Comprehension
A large annotated corpus for learning natural language inference
Semantic Parsing on Freebase from Question-Answer Pairs
Annotation Artifacts in Natural Language Inference Data