Quoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning

@article{Dasigi2019QuorefAR,
  title={Quoref: A Reading Comprehension Dataset with Questions Requiring Coreferential Reasoning},
  author={Pradeep Dasigi and Nelson F. Liu and Ana Marasović and Noah A. Smith and Matt Gardner},
  journal={ArXiv},
  year={2019},
  volume={abs/1908.05803}
}
Machine comprehension of texts longer than a single sentence often requires coreference resolution. However, most current reading comprehension benchmarks do not contain complex coreferential phenomena and hence fail to evaluate the ability of models to resolve coreference. We present a new crowdsourced dataset containing more than 24K span-selection questions that require resolving coreference among entities in over 4.7K English paragraphs from Wikipedia. Obtaining questions focused on such…
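As a rough illustration of the span-selection format described in the abstract, the sketch below shows how one might load and inspect Quoref-style examples. It assumes the dataset is distributed as SQuAD-style JSON (paragraphs with question–answer pairs whose answers are character-offset spans in the context); the file name and field names are assumptions for illustration, not details taken from this page.

```python
import json

def iter_quoref_examples(path):
    """Yield (context, question, answer_spans) triples from a SQuAD-style
    JSON file. Field names here are assumptions about the release format,
    not taken from the paper itself."""
    with open(path, encoding="utf-8") as f:
        data = json.load(f)["data"]
    for article in data:
        for paragraph in article["paragraphs"]:
            context = paragraph["context"]
            for qa in paragraph["qas"]:
                # Each answer is a span in the paragraph, given as its text
                # plus a character offset into the context.
                spans = [(a["answer_start"], a["text"]) for a in qa["answers"]]
                yield context, qa["question"], spans

# Hypothetical usage with a local copy of the training file:
# for context, question, spans in iter_quoref_examples("quoref-train.json"):
#     print(question, spans)
#     break
```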
