Bootstrapping Multiple-Choice Tests with The-Mentor

@inproceedings{Mendes2011BootstrappingMT,
  title={Bootstrapping Multiple-Choice Tests with The-Mentor},
  author={Ana Cristina Mendes and S{\'e}rgio Curto and Lu{\'i}sa Coheur},
  booktitle={CICLing},
  year={2011}
}
It is very likely that everyone has, at least once in their lifetime, answered a multiple-choice test. Multiple-choice tests are considered an effective technique for knowledge assessment, requiring a short response time and allowing a broad set of topics to be covered. Nevertheless, their creation can be a time-consuming and labour-intensive task. Here, computer-aided generation of multiple-choice tests can reduce these drawbacks: to the human assessor is… 
Question Generation based on Lexico-Syntactic Patterns Learned from the Web
The question generation task as performed by THE-MENTOR is detailed, and several techniques applied in order to discard low-quality items are described.
Exploring linguistically-rich patterns for question generation
The impact of varying several parameters during pattern learning and matching in the Question Generation task is discussed and semantics is introduced by means of named entities in the authors' lexico-syntactic patterns.
Sistema de evaluación para la formación a distancia de profesionales
Distance learning in the field of engineering entails the need to define a self-assessment system that allows students to make sure they are achieving the objectives and the…
An Evaluation Framework and Instrument for Evaluating e-Assessment Tools
This research uses literature and a series of six empirical action research studies to develop an evaluation framework of categories and criteria called SEAT (Selecting and Evaluating e-Assessment Tools).
A minimally supervised approach for question generation: what can we learn from a single seed?
This paper investigates how many quality natural language questions can be generated from a single question/answer pair (a seed), and learns patterns that relate the various levels of linguistic information in the question/answer seed with the same levels of information in text.

References

Showing 1-10 of 23 references
A computer-aided environment for generating multiple-choice test items
A novel computer-aided procedure for generating multiple-choice test items from electronic documents is presented; it makes use of language resources such as corpora and ontologies, and saves both time and production costs.
A Real-Time Multiple-Choice Question Generation For Language Testing: A Preliminary Study
This paper describes a real-time system that generates questions on English grammar and vocabulary from on-line news articles, using basic machine learning algorithms such as Naive Bayes and K-Nearest Neighbors.
Computer-aided generation of multiple-choice tests
  • R. Mitkov, L. Ha
  • Computer Science
    International Conference on Natural Language Processing and Knowledge Engineering, 2003. Proceedings. 2003
  • 2003
The results from the conducted evaluation suggest that the new procedure is very effective, saving considerable time and labour, and that the test items produced with the help of the program are not of inferior quality to those produced manually.
Measuring Non-native Speakers’ Proficiency of English by Using a Test with Automatically-Generated Fill-in-the-Blank Questions
The proposed method provides teachers and testers with a tool that reduces the time and expenditure involved in testing English proficiency, and the number of questions can be reduced by using item information in Item Response Theory (IRT).
A Selection Strategy to Improve Cloze Question Quality
We present a strategy to improve the quality of automatically generated cloze and open cloze questions which are used by the REAP tutoring system for assessment in the ill-defined domain of English…
Genetic Algorithms for Data-Driven Web Question Answering
An evolutionary approach for the computation of exact answers to natural language (NL) questions is presented, searching for those substrings in the snippets whose contexts are most similar to the contexts of already known answers.
Patterns of Potential Answer Expressions as Clues to the Right Answers
The participation at TREC-10 was a test of some basic mechanisms of the text processing technology developed in the framework of the CrossReader project; these mechanisms will be implemented in the new TextRoller versions.
QuestionBank: Creating a Corpus of Parse-Annotated Questions
Using QuestionBank training data improves parser performance to 89.75% labelled bracketing f-score, and a new method for recovering empty nodes and their antecedents (capturing long distance dependencies) from parser output in CFG trees using LFG f-structure reentrancies is introduced.
FAST – An Automatic Generation System for Grammar Tests
This paper introduces a method for the semi-automatic generation of grammar test items by applying Natural Language Processing (NLP) techniques, and describes a prototype system FAST (Free Assessment of Structural Tests).
Learning Question Classifiers
A hierarchical classifier is learned that is guided by a layered semantic hierarchy of answer types, and eventually classifies questions into fine-grained classes.