• Corpus ID: 2100831

# MCTest: A Challenge Dataset for the Open-Domain Machine Comprehension of Text

@inproceedings{Richardson2013MCTestAC,
title={MCTest: A Challenge Dataset for the Open-Domain Machine Comprehension of Text},
author={Matthew Richardson and Christopher J. C. Burges and Erin Renshaw},
booktitle={EMNLP},
year={2013}
}
• Published in EMNLP 1 October 2013
• Computer Science
We present MCTest, a freely available set of stories and associated questions intended for research on the machine comprehension of text. [...] One common method for evaluating someone's understanding of text is by giving them a multiple-choice reading comprehension test. This has the advantage that it is objectively gradable (vs. essays), yet may test a range of abilities such as causal or counterfactual reasoning, inference among relations, or just basic understanding of the world in which the…
560 Citations
Looking Beyond the Surface: A Challenge Set for Reading Comprehension over Multiple Sentences
• Computer Science
NAACL
• 2018
The dataset is the first to study multi-sentence inference at scale, with an open-ended set of question types that requires reasoning skills, and finds human solvers to achieve an F1-score of 88.1%.
Learning Answer-Entailing Structures for Machine Comprehension
• Computer Science
ACL
• 2015
A unified max-margin framework is presented that learns to find hidden structures that explain the relation between the question, correct answer, and text, and is extended to incorporate multi-task learning on the different subtasks that are required to perform machine comprehension.
The NarrativeQA Reading Comprehension Challenge
• Computer Science
TACL
• 2018
A new dataset and set of tasks in which the reader must answer questions about stories by reading entire books or movie scripts are presented, designed so that successfully answering their questions requires understanding the underlying narrative rather than relying on shallow pattern matching or salience.
Probing Prior Knowledge Needed in Challenging Chinese Machine Reading Comprehension
• Computer Science
ArXiv
• 2019
Experimental results demonstrate that linguistic and general world knowledge may help improve the performance of the baseline reader in both general and domain-specific tasks.
Cosmos QA: Machine Reading Comprehension with Contextual Commonsense Reasoning
• Computer Science
EMNLP
• 2019
This paper introduces Cosmos QA, a large-scale dataset of 35,600 problems that require commonsense-based reading comprehension, formulated as multiple-choice questions, and proposes a new architecture that improves over the competitive baselines.
MCScript: A Novel Dataset for Assessing Machine Comprehension Using Script Knowledge
• Computer Science
LREC
• 2018
A large dataset of narrative texts and questions about these texts, intended to be used in a machine comprehension task that requires reasoning using commonsense knowledge, and shows that the mode of data collection via crowdsourcing results in a substantial amount of inference questions.
Investigating Prior Knowledge for Challenging Chinese Machine Reading Comprehension
• Computer Science
TACL
• 2020
This paper presents the first free-form multiple-choice Chinese machine reading comprehension dataset (C3), containing 13,369 documents and their associated 19,577 multiple-choice free-form questions collected from Chinese-as-a-second-language examinations, and presents a comprehensive analysis of the prior knowledge needed for these real-world problems.
A Parallel-Hierarchical Model for Machine Comprehension on Sparse Data
• Computer Science
ACL
• 2016
The Parallel-Hierarchical model sets a new state of the art for MCTest, outperforming previous feature-engineered approaches slightly and previous neural approaches by a significant margin (over 15% absolute).
On Making Reading Comprehension More Comprehensive
• Computer Science
EMNLP
• 2019
This work justifies a question answering approach to reading comprehension and describes the various kinds of questions one might use to more fully test a system’s comprehension of a passage, moving beyond questions that only probe local predicate-argument structures.
Multi-source Meta Transfer for Low Resource Multiple-Choice Question Answering
• Computer Science
ACL
• 2020
In this framework, multi-source meta transfer (MMT) is proposed for low-resource MCQA, incorporating multiple training sources to learn a generalized feature representation across domains and introducing a meta-transfer step that can be integrated into the multi-source meta training.

## References

Showing 1-10 of 29 references
• Computer Science
Natural Language Engineering
• 2003
This paper discusses an approach that successfully enhanced an existing IS system with RC capabilities, which constitutes a possible foundation for more advanced forms of dialogue-based Q/A.
Reading comprehension tests for computer-based understanding evaluation
• Computer Science
Natural Language Engineering
• 2005
A methodology for evaluation of the application of modern natural language technologies to the task of responding to RC tests is presented, based on ABCs (Abduction Based Comprehension system), an automated system for taking tests requiring short answer phrases as responses.
An Entailment-Based Approach to the QA4MRE Challenge
• Computer Science
CLEF
• 2012
This paper describes the entry to the 2012 QA4MRE Main Task: it estimates the likelihood of textual entailment between sentences in the text and the question Q and each candidate answer Ai, and finds sets of sentences SQ, SA that plausibly entail Q or one of the Ai, respectively.
Deep Read: A Reading Comprehension System
• Computer Science
ACL
• 1999
Initial work on Deep Read, an automated reading comprehension system that accepts arbitrary text input (a story) and answers questions about it, is described, with a baseline system that retrieves the sentence containing the answer 30-40% of the time.
A Challenge Set for Advancing Language Modeling
• Computer Science
WLM@NAACL-HLT
• 2012
A new, publicly available corpus intended to stimulate research into language modeling techniques which are sensitive to overall sentence coherence is described, and a large set of Nineteenth and early Twentieth Century texts are provided as training material.
Automatic Multi-Layer Corpus Annotation for Evaluating Question Answering Methods: CBC4Kids
• Computer Science
LINC@EACL
• 2003
This work has enriched the MITRE CBC4Kids corpus with multiple XML annotation layers recording the output of various tokenizers, lemmatizers, a stemmer, a semantic tagger, POS taggers and syntactic parsers, and built a baseline NLQA system for word-overlap-based answer retrieval.
Looking Under the Hood: Tools for Diagnosing your Question Answering Engine
It is shown that Q/A systems perform better when there are multiple answer opportunities per question, and the limitations of both term overlap and answer typing for distinguishing between competing answer candidates are quantified.
Learning strategies for story comprehension: a reinforcement learning approach
• Computer Science
ICML
• 2005
A model for improving story comprehension through inductive generalization and reinforcement learning, based on classified examples, is presented, demonstrating that a learning-based approach can improve upon "matching and extraction"-only techniques.
Factoid Question Answering over Unstructured and Structured Web Content
• Computer Science
TREC
• 2005
Two new, built-from-scratch, web-based question answering systems applied to the TREC 2005 Main Question Answering task are described; they use complementary models for answering questions over both structured and unstructured content on the Web.
Automatic Gap-fill Question Generation from Text Books
• Computer Science
BEA@ACL
• 2011
An automatic question generation system is presented that generates gap-fill questions for content in a document by first blanking out keys from the sentences and then determining distractors for those keys.