Justin Kruger

We investigate methods for aggregating the judgements of multiple individuals in a linguistic annotation task into a collective judgement. We define several aggregators that take the reliability of annotators into account and thus go beyond the commonly used majority vote, and we empirically analyse their performance on new datasets of crowdsourced data.
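This abstract contrasts the commonly used majority vote with aggregators that weight annotators by their reliability. The sketch below is only a minimal illustration of that general idea, not the paper's actual aggregators: it assumes a simple two-pass scheme in which reliability is estimated as agreement with a first-pass majority vote, and all function and variable names are invented for the example.

```python
from collections import Counter, defaultdict

def majority_vote(labels):
    """Plain majority vote over a list of labels for one item."""
    return Counter(labels).most_common(1)[0][0]

def reliability_weighted_vote(annotations):
    """
    annotations: dict mapping item -> {annotator: label}.
    Returns a dict mapping item -> aggregated label.

    Illustrative reliability estimate: the fraction of items on which an
    annotator agrees with the plain majority vote. The final label for each
    item maximises the summed reliability of its supporters.
    """
    # First pass: plain majority vote per item.
    majority = {item: majority_vote(list(votes.values()))
                for item, votes in annotations.items()}

    # Estimate each annotator's reliability as agreement with the majority.
    agree, total = defaultdict(int), defaultdict(int)
    for item, votes in annotations.items():
        for annotator, label in votes.items():
            total[annotator] += 1
            agree[annotator] += (label == majority[item])
    reliability = {a: agree[a] / total[a] for a in total}

    # Second pass: weighted vote using the estimated reliabilities.
    result = {}
    for item, votes in annotations.items():
        scores = defaultdict(float)
        for annotator, label in votes.items():
            scores[label] += reliability[annotator]
        result[item] = max(scores, key=scores.get)
    return result

if __name__ == "__main__":
    data = {
        "item1": {"ann1": "NOUN", "ann2": "NOUN", "ann3": "VERB"},
        "item2": {"ann1": "VERB", "ann2": "NOUN", "ann3": "VERB"},
        "item3": {"ann1": "NOUN", "ann2": "VERB", "ann3": "VERB"},
    }
    print(reliability_weighted_vote(data))
```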
The authors find that exposure to different types of categories or assortments in a task creates a mind-set that changes how consumers process …
Crowdsourcing is an important tool, e.g., in computational linguistics and computer vision, to efficiently label large amounts of data using nonexpert annotators. The individual annotations collected need to be aggregated into a single collective annotation. The hope is that the quality of this collective annotation will be comparable to that of a …