A CALL System for Learning Preposition Usage

@inproceedings{Lee2016ACS,
  title={A CALL System for Learning Preposition Usage},
  author={John Sie Yuen Lee and Donald Sturgeon and Mengqi Luo},
  booktitle={ACL},
  year={2016}
}
Fill-in-the-blank items are commonly featured in computer-assisted language learning (CALL) systems. An item displays a sentence with a blank, and often proposes a number of choices for filling it. These choices should include one correct answer and several plausible distractors. We describe a system that, given an English corpus, automatically generates distractors to produce items for preposition usage. We report a comprehensive evaluation of this system, involving both experts and learners…
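As a rough illustration of the kind of item the abstract describes, the sketch below blanks out a preposition and pairs it with distractor choices. This is a hypothetical toy, not the paper's method: the distractor pool is hard-coded rather than corpus-derived, and distractors are sampled at random rather than selected for plausibility.

```python
# Hypothetical sketch of fill-in-the-blank item generation for prepositions.
# Not the paper's implementation: distractors here are random, not corpus-based.
import random

PREPOSITIONS = {"in", "on", "at", "by", "for", "with", "to", "from", "of", "about"}

def make_item(sentence, seed=0):
    """Blank out the first preposition and propose three distractors."""
    words = sentence.split()
    for i, w in enumerate(words):
        key = w.lower().strip(".,")
        if key in PREPOSITIONS:
            stem = " ".join(words[:i] + ["____"] + words[i + 1:])
            pool = sorted(PREPOSITIONS - {key})
            distractors = random.Random(seed).sample(pool, 3)
            return {"stem": stem, "answer": key,
                    "choices": sorted(distractors + [key])}
    return None  # no preposition found

item = make_item("She arrived at the station early.")
print(item["stem"])    # She arrived ____ the station early.
print(item["answer"])  # at
```

A real system of this kind would replace the random sample with distractors scored for plausibility, which is exactly the problem the paper studies.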

Citing Papers

Personalized Exercises for Preposition Learning

A computer-assisted language learning (CALL) system that generates fill-in-the-blank items for preposition usage, progressing from easier to harder sentences to minimize any hindrance to preposition learning posed by difficult vocabulary.

Automatic Generation of Multiple-Choice Items for Prepositions Based on Word2vec

The experimental results show that the proposed approach can generate preposition multiple-choice items targeting learners at different levels; the results on distractor plausibility and reliability also point to the validity of the approach.
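The word2vec-based idea above can be sketched as ranking candidate distractors by embedding similarity to the correct answer. The vectors below are made up for illustration; a real system would use embeddings trained on a corpus.

```python
# Toy sketch of choosing distractors by embedding similarity, in the spirit
# of word2vec-based approaches. The 3-d "embeddings" below are invented.
import numpy as np

vectors = {
    "in":   np.array([0.9, 0.1, 0.2]),
    "on":   np.array([0.8, 0.2, 0.1]),
    "at":   np.array([0.7, 0.3, 0.2]),
    "from": np.array([0.1, 0.9, 0.4]),
    "with": np.array([0.2, 0.2, 0.9]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_distractors(answer, k=2):
    """Rank the other prepositions by similarity to the answer."""
    scores = {w: cosine(vectors[answer], v)
              for w, v in vectors.items() if w != answer}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(nearest_distractors("in"))  # ['on', 'at'] with these toy vectors
```

The intuition is that prepositions close to the answer in embedding space are plausible but wrong, which is what makes a good distractor.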

AGReE: A system for generating Automated Grammar Reading Exercises

We describe the AGReE system, which takes user-submitted passages as input and automatically generates multiple-choice grammar practice exercises that can be completed while reading.

Distractor Generation for Chinese Fill-in-the-blank Items

This paper investigates the quality of distractors generated according to a number of criteria, including part of speech, difficulty level, spelling, word co-occurrence and semantic similarity; the proposed method yields distractors that are significantly more plausible than those generated by baseline methods.
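Combining several such criteria is often done with a weighted score per candidate. The sketch below is a hypothetical illustration of that pattern; the weights and feature values are invented, not taken from the paper.

```python
# Hypothetical sketch of ranking distractor candidates by a weighted
# combination of criteria (co-occurrence, semantic similarity, spelling).
# Weights and feature values are illustrative only.
def score_candidate(features, weights):
    """Weighted sum of per-criterion scores, each assumed to lie in [0, 1]."""
    return sum(weights[k] * features[k] for k in weights)

weights = {"cooccurrence": 0.4, "semantic_similarity": 0.4, "spelling": 0.2}
candidates = {
    "on": {"cooccurrence": 0.7, "semantic_similarity": 0.9, "spelling": 0.3},
    "of": {"cooccurrence": 0.2, "semantic_similarity": 0.3, "spelling": 0.8},
}
ranked = sorted(candidates,
                key=lambda c: score_candidate(candidates[c], weights),
                reverse=True)
print(ranked)  # ['on', 'of']
```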

Difficulty-aware Distractor Generation for Gap-Fill Items

Experiments show that BERT outperforms semantic similarity measures, in terms of both correlation with human judgment and classification accuracy of distractor plausibility; a neural language model is used to rank distractors in terms of difficulty.

References


Gap-fill Tests for Language Learners: Corpus-Driven Item Generation

A system, TEDDCLOG, that automatically generates draft test items from a corpus: it takes the key (the word which will form the correct answer to the exercise) as input and presents the sentences and distractors to the user for approval, modification or rejection.

Automatic generation of cloze items for prepositions

This paper proposes two methods, based on collocations and on non-native English corpora, to generate distractors for prepositions; these are found to be more successful in attracting users than a baseline that relies only on word frequency, a common criterion in past research.

Automatic Generation of Challenging Distractors Using Context-Sensitive Inference Rules

This work proposes to employ context-sensitive lexical inference rules in order to generate distractors that are semantically similar to the gap target word in some sense, but not in the particular sense induced by the gap-fill context.

Automatic Question Generation for Vocabulary Assessment

Experimental results suggest that these automatically generated questions give a measure of vocabulary skill that correlates well with subject performance on independently developed human-written questions; strong correlations with standardized vocabulary tests also point to the validity of this approach to automatic assessment of word knowledge.

FollowYou!: An Automatic Language Lesson Generation System

The idea is to transform any text the learner would like to read into a format like that of a textbook, with similar supporting materials to help learners digest the text.

Applications of Lexical Information for Algorithmically Composing Multiple-Choice Cloze Items

By providing both reading and listening cloze items, this work would like to offer a somewhat adaptive system for assisting Taiwanese children in learning English vocabulary.

FAST – An Automatic Generation System for Grammar Tests

This paper introduces a method for the semi-automatic generation of grammar test items by applying Natural Language Processing (NLP) techniques, and describes a prototype system FAST (Free Assessment of Structural Tests).

Using Error-Annotated ESL Data to Develop an ESL Error Correction System

This paper trains a classifier on a large-scale, error-tagged corpus of English essays, relying on contextual and grammatical features surrounding preposition usage, to show that this model outperforms models trained on well-edited text produced by native speakers of English.

Measuring Non-native Speakers’ Proficiency of English by Using a Test with Automatically-Generated Fill-in-the-Blank Questions

The proposed method provides teachers and testers with a tool that reduces time and expenditure for testing English proficiency, and the number of questions can be reduced by using item information in IRT.

Automatic Linguistic Annotation of Large Scale L2 Databases: The EF-Cambridge Open Language Database (EFCamDat)

A new English L2 database, the EF-Cambridge Open Language Database (henceforth EFCAMDAT), is introduced; it was developed by the Department of Theoretical and Applied Linguistics at the University of Cambridge in collaboration with EF Education First, an international educational organization.