Crowdsourcing Multiple Choice Science Questions

@inproceedings{Welbl2017CrowdsourcingMC,
  title={Crowdsourcing Multiple Choice Science Questions},
  author={Johannes Welbl and Nelson F. Liu and Matt Gardner},
  booktitle={NUT@EMNLP},
  year={2017}
}
We present a novel method for obtaining high-quality, domain-targeted multiple choice questions from crowd workers. Generating these questions can be difficult without trading away originality, relevance, or diversity in the answer options. Our method addresses these problems by leveraging a large corpus of domain-specific text and a small set of existing questions. It produces model suggestions for document selection and answer distractor choice which aid the human question generation process…
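The paper's actual suggestion models are not reproduced here, but the distractor-suggestion idea — proposing answer options that are semantically close to the correct answer without being identical to it — can be illustrated with a minimal sketch. The `rank_distractors` helper and the toy embedding vectors below are hypothetical stand-ins, not the authors' implementation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_distractors(answer, candidates, embed):
    """Rank candidate distractors by embedding similarity to the
    correct answer, excluding the answer itself. Plausible distractors
    score high: close in meaning, but not the answer."""
    scored = [(cosine(embed[answer], embed[c]), c)
              for c in candidates if c != answer]
    scored.sort(reverse=True)
    return [c for _, c in scored]

# Toy vectors standing in for embeddings trained on a science corpus.
toy_embed = {
    "mitochondria": [0.9, 0.1, 0.0],
    "ribosome":     [0.8, 0.2, 0.1],
    "chloroplast":  [0.7, 0.3, 0.0],
    "volcano":      [0.0, 0.1, 0.9],
}

ranking = rank_distractors("mitochondria", list(toy_embed), toy_embed)
# → ["ribosome", "chloroplast", "volcano"]
```

In this toy setup the biology terms rank above the off-topic "volcano", which is the behavior a distractor suggester wants; a real system would draw candidates from the domain corpus and filter near-synonyms of the answer.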

Citations

A selection of the 26 publications citing this paper:

  • Distractor Generation for Multiple Choice Questions Using Learning to Rank (cites methods; highly influenced)
  • Ranking Distractors for Multiple Choice Questions (cites methods; highly influenced)
  • An Empirical Evaluation on Word Embeddings Across Reading Comprehension (cites background; highly influenced)
  • A Systematic Review of Automatic Question Generation for Educational Purposes (cites background)
  • FriendsQA: Open-Domain Question Answering on TV Show Transcripts (cites background)
  • Improving Question Answering with External Knowledge (cites background)
