This study exploits the statistical redundancy inherent in natural language to automatically predict scores for essays. We use a hybrid feature identification method, including syntactic structure analysis, rhetorical structure analysis, and topical analysis, to score essay responses from test-takers of the Graduate Management Admission Test (GMAT) and the …
Automated essay-scoring technologies can enhance both large-scale assessment and classroom instruction. Essay evaluation software not only numerically rates essays but also analyzes grammar, usage, mechanics, and discourse structure [1,2]. In the classroom, such applications can supplement traditional instruction by giving students automated feedback that …
Criterion℠ Online Essay Evaluation Service includes a capability that labels sentences in student writing with essay-based discourse elements (e.g., thesis statements). We describe a new system that enhances Criterion's capability by evaluating multiple aspects of coherence in essays. This system identifies features of sentences based on semantic …
This paper describes a deployed educational technology application: the Criterion℠ Online Essay Evaluation Service, a web-based system that provides automated scoring and evaluation of student essays. Criterion has two complementary applications: e-rater®, an automated essay scoring system, and Critique Writing Analysis Tools, a suite of programs that …
We show how Barzilay and Lapata's (2008) entity-based coherence algorithm can be applied to a new, noisy data domain – student essays. We demonstrate that by combining Barzilay and Lapata's entity-based features with novel features related to grammar errors and word usage, one can greatly improve the performance of automated coherence prediction for …
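To make the entity-based features concrete: the Barzilay and Lapata approach builds an "entity grid" (sentences × entities) and uses the probabilities of role transitions down each column as coherence features. The sketch below is a deliberately minimal, hypothetical illustration – `entity_grid`, `transition_features`, and the toy essay are invented for this example, and real systems assign subject/object/other roles with a parser rather than the binary present/absent roles used here.

```python
from collections import Counter
from itertools import product

def entity_grid(sentences):
    """Build a toy entity grid: one row per sentence, one column per entity.
    Each cell is 'X' if the entity is mentioned in that sentence, '-' otherwise.
    (Full entity grids distinguish S/O/X syntactic roles via parsing.)"""
    entities = sorted({e for sent in sentences for e in sent})
    return [[('X' if e in sent else '-') for e in entities] for sent in sentences]

def transition_features(grid, n=2):
    """Relative frequency of each length-n role transition down every column --
    the core feature vector of entity-based coherence models."""
    counts, total = Counter(), 0
    for col in zip(*grid):                      # one column = one entity's history
        for i in range(len(col) - n + 1):
            counts[col[i:i + n]] += 1
            total += 1
    return {t: counts[t] / total for t in product('X-', repeat=n)}

# Each "sentence" is represented only by the set of entity mentions it contains.
essay = [{"system", "essay"}, {"essay", "score"}, {"score"}]
grid = entity_grid(essay)
feats = transition_features(grid)   # e.g. feats[('X', 'X')] is the X->X rate
```

In a scoring pipeline, `feats` would be concatenated with other features (such as the grammar-error and word-usage features the abstract mentions) and fed to a classifier.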
We introduce a cognitive framework for measuring reading comprehension that includes the use of novel summary writing tasks. We derive NLP features from the holistic rubric used to score the summaries written by students for such tasks and use them to design a preliminary, automated scoring system. Our results show that the automated approach performs well …
Electronic Essay Rater (e-rater) is a prototype automated essay scoring system built at Educational Testing Service (ETS) that uses discourse marking, in addition to syntactic information and topical content vector analyses, to automatically assign essay scores. This paper gives a general description of e-rater as a whole, but its emphasis is on …
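The "topical content vector analysis" mentioned above compares an essay's term vector against vectors built from essays at each score point and favors the most similar one. The following is a minimal, hypothetical sketch of that idea – `topical_score`, the toy centroids, and the bag-of-words weighting are illustrative assumptions, not e-rater's actual implementation (which uses weighted term vectors trained on large scored samples).

```python
import math
from collections import Counter

def cosine(u, v):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(u[t] * v.get(t, 0) for t in u)
    norm = math.sqrt(sum(c * c for c in u.values())) * \
           math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

def topical_score(essay, score_centroids):
    """Return the score point whose centroid vector is most similar to the
    essay -- a toy version of topical content vector analysis."""
    vec = Counter(essay.lower().split())
    return max(score_centroids, key=lambda s: cosine(vec, score_centroids[s]))

# Invented centroids standing in for vectors trained on scored essays.
centroids = {
    6: Counter("argument evidence conclusion analysis".split()),
    2: Counter("good bad thing stuff".split()),
}
best = topical_score("the analysis offers evidence for the conclusion", centroids)
# best -> 6: the essay shares 'analysis', 'evidence', 'conclusion' with that centroid
```

Production systems typically apply TF-IDF-style weighting and morphological normalization before the similarity computation, but the centroid-comparison structure is the same.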