BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions

@article{Clark2019BoolQET,
  title={BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions},
  author={Christopher Clark and Kenton Lee and Ming-Wei Chang and Tom Kwiatkowski and Michael Collins and Kristina Toutanova},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.10044}
}
In this paper we study yes/no questions that are naturally occurring --- meaning that they are generated in unprompted and unconstrained settings. [...] It achieves 80.4% accuracy, compared to 90% accuracy for human annotators (and a 62% majority baseline), leaving a significant gap for future work.
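The 62% majority baseline quoted above is simply the accuracy obtained by always predicting the dataset's more frequent answer. A minimal sketch of that computation (the label list here is illustrative, not the actual BoolQ distribution):

```python
from collections import Counter

def majority_baseline_accuracy(labels):
    """Accuracy of a classifier that always predicts the most frequent label."""
    counts = Counter(labels)
    most_common_count = counts.most_common(1)[0][1]
    return most_common_count / len(labels)

# Illustrative yes/no answers; in BoolQ roughly 62% of answers are "yes",
# which is where the 62% majority baseline comes from.
labels = [True, True, True, False, False]
print(majority_baseline_accuracy(labels))  # → 0.6
```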
