Cheap and Fast - But is it Good? Evaluating Non-Expert Annotations for Natural Language Tasks

Abstract

Human linguistic annotation is crucial for many natural language processing tasks but can be expensive and time-consuming. We explore the use of Amazon’s Mechanical Turk system, a significantly cheaper and faster method for collecting annotations from a broad base of paid non-expert contributors over the Web. We investigate five tasks: affect recognition, word similarity, recognizing textual entailment, event temporal ordering, and word sense disambiguation. For all five, we show high agreement between Mechanical Turk non-expert annotations and existing gold standard labels provided by expert labelers. For the task of affect recognition, we also show that using non-expert labels for training machine learning algorithms can be as effective as using gold standard annotations from experts. We propose a technique for bias correction that significantly improves annotation quality on two tasks. We conclude that many large labeling tasks can be effectively designed and carried out in this manner at a fraction of the usual expense.
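The bias correction the abstract mentions works, roughly, by learning how each worker's labels relate to the truth on a small gold-calibrated subset and then weighting that worker's votes accordingly. Below is a minimal sketch of one such naive-Bayes-style recalibration: estimate each worker's label distribution conditioned on the true label, then score candidate labels by a prior plus per-worker log-likelihoods. The function names, Laplace smoothing choice, and toy data are illustrative assumptions, not the authors' implementation.

```python
# Sketch of per-worker bias correction for aggregating noisy crowd labels.
# Assumed data layout: gold maps item -> true label for a calibration
# subset; worker_labels maps item -> {worker: label}.
from collections import defaultdict
import math


def estimate_confusion(gold, worker_labels, classes, smoothing=1.0):
    """Estimate P(worker says l | true label y) per worker from
    gold-labeled items, with Laplace smoothing."""
    counts = defaultdict(lambda: defaultdict(lambda: defaultdict(float)))
    for item, true_y in gold.items():
        for worker, said in worker_labels.get(item, {}).items():
            counts[worker][true_y][said] += 1.0
    confusion = {}
    for worker, by_true in counts.items():
        confusion[worker] = {}
        for y in classes:
            total = sum(by_true[y].values()) + smoothing * len(classes)
            confusion[worker][y] = {
                l: (by_true[y][l] + smoothing) / total for l in classes
            }
    return confusion


def bias_corrected_label(item_labels, confusion, prior, classes):
    """Pick argmax_y of log P(y) + sum_w log P(label_w | y):
    a vote in which more reliable workers carry more weight."""
    def score(y):
        s = math.log(prior[y])
        for worker, said in item_labels.items():
            if worker in confusion:  # skip workers never seen on gold items
                s += math.log(confusion[worker][y][said])
        return s
    return max(classes, key=score)


# Toy usage: two workers, a two-item gold subset, one disputed item.
classes = ["pos", "neg"]
gold = {"i1": "pos", "i2": "neg"}
worker_labels = {
    "i1": {"w1": "pos", "w2": "neg"},   # w2 errs on a gold item
    "i2": {"w1": "neg", "w2": "neg"},
    "i3": {"w1": "pos", "w2": "neg"},   # disagreement to resolve
}
confusion = estimate_confusion(gold, worker_labels, classes)
prior = {"pos": 0.5, "neg": 0.5}
# Prints "pos": w1, who was accurate on the gold subset, outvotes w2.
print(bias_corrected_label(worker_labels["i3"], confusion, prior, classes))
```

Here a simple majority vote would deadlock on item i3; the recalibrated vote breaks the tie in favor of the worker whose gold-subset record is better, which is the intuition behind the paper's reported quality gains.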

[Citation graph omitted. Citations per year, 2008–2017: 1,677 total citations, per Semantic Scholar's estimate.]

Cite this paper

@inproceedings{Snow2008CheapAF,
  title     = {Cheap and Fast - But is it Good? Evaluating Non-Expert Annotations for Natural Language Tasks},
  author    = {Rion Snow and Brendan T. O'Connor and Daniel Jurafsky and Andrew Y. Ng},
  booktitle = {EMNLP},
  year      = {2008}
}