Crowdsourcing Question-Answer Meaning Representations

@article{Michael2017CrowdsourcingQM,
  title={Crowdsourcing Question-Answer Meaning Representations},
  author={Julian Michael and Gabriel Stanovsky and Luheng He and Ido Dagan and Luke Zettlemoyer},
  journal={ArXiv},
  year={2017},
  volume={abs/1711.05885}
}
Abstract: We introduce Question-Answer Meaning Representations (QAMRs), which represent the predicate-argument structure of a sentence as a set of question-answer pairs. We also develop a crowdsourcing scheme to show that QAMRs can be labeled with very little training, and gather a dataset with over 5,000 sentences and 100,000 questions. A detailed qualitative analysis demonstrates that the crowd-generated question-answer pairs cover the vast majority of predicate-argument relationships in existing…
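As a rough illustration of the representation the abstract describes, the sketch below models a QAMR as a sentence paired with a set of question-answer tuples, where each pair encodes one predicate-argument relationship. The example sentence and pairs are invented for illustration and are not drawn from the QAMR dataset.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class QAPair:
    question: str  # natural-language question about the sentence
    answer: str    # span of the sentence that answers it


@dataclass
class QAMR:
    sentence: str
    qa_pairs: list  # QA pairs covering the predicate-argument structure


# Invented example: each pair captures one argument of the
# predicate "prepared".
qamr = QAMR(
    sentence="The chef prepared a meal for the guests.",
    qa_pairs=[
        QAPair("Who prepared something?", "The chef"),
        QAPair("What did the chef prepare?", "a meal"),
        QAPair("Who was the meal prepared for?", "the guests"),
    ],
)

# In this sketch, every answer is a contiguous span of the sentence.
assert all(p.answer in qamr.sentence for p in qamr.qa_pairs)
```

This mirrors the abstract's framing of a QAMR as a set of question-answer pairs over a sentence; the actual dataset format and annotation interface may differ.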

