
Question and Answer Test-Train Overlap in Open-Domain Question Answering Datasets

@article{Lewis2020QuestionAA,
  title={Question and Answer Test-Train Overlap in Open-Domain Question Answering Datasets},
  author={Patrick Lewis and Pontus Stenetorp and Sebastian Riedel},
  journal={ArXiv},
  year={2020},
  volume={abs/2008.02637}
}
Ideally, Open-Domain Question Answering models should exhibit a number of competencies, ranging from simply memorizing questions seen at training time, to answering novel question formulations with answers seen during training, to generalizing to completely novel questions with novel answers. However, single aggregated test-set scores do not show the full picture of what capabilities models truly have. In this work, we perform a detailed study of the test sets of three popular open-domain benchmark datasets with respect to these competencies.
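
For intuition, here is a minimal sketch of the kind of test-train overlap measurement the abstract describes, written in Python. The dataset format, the `normalize` helper, and the use of exact string matching for question overlap are all assumptions made for illustration; the paper relies on human annotation to identify question paraphrases, so this is a simplified approximation rather than the authors' actual method.

```python
# Sketch: estimate what fraction of test answers and questions also
# appear in the training set. Normalization and exact matching are
# illustrative assumptions, not the paper's annotation procedure.
import re
import string


def normalize(text: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def overlap_stats(train, test):
    """Return (answer_overlap, question_overlap) fractions.

    `train` and `test` are lists of (question, answers) pairs, where
    `answers` is a list of acceptable answer strings.
    """
    train_questions = {normalize(q) for q, _ in train}
    train_answers = {normalize(a) for _, answers in train for a in answers}

    # A test item counts as answer-overlapping if any of its reference
    # answers also appears (after normalization) in the training set.
    answer_hits = sum(
        any(normalize(a) in train_answers for a in answers)
        for _, answers in test
    )
    question_hits = sum(normalize(q) in train_questions for q, _ in test)

    return answer_hits / len(test), question_hits / len(test)


if __name__ == "__main__":
    train = [("who wrote hamlet?", ["William Shakespeare"])]
    test = [("who is the author of hamlet", ["William Shakespeare"])]
    ans_overlap, q_overlap = overlap_stats(train, test)
    print(f"answer overlap: {ans_overlap:.0%}, question overlap: {q_overlap:.0%}")
```

On this toy pair the answer overlaps (100%) while the paraphrased question does not match exactly (0%), which is exactly the distinction the paper draws: answer overlap is cheap to detect automatically, while question paraphrase overlap is harder and is why the authors annotate it by hand.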
