Semi-Supervised Consensus Labeling for Crowdsourcing

@inproceedings{Tang2011SemiSupervisedCL,
  title={Semi-Supervised Consensus Labeling for Crowdsourcing},
  author={Wei Tang},
  year={2011}
}
Because individual crowd workers often exhibit high variance in annotation accuracy, it is common to ask multiple crowd workers to label each example and infer a single consensus label. While simple majority vote computes consensus by equally weighting each worker’s vote, weighted voting assigns greater weight to more accurate workers, where accuracy is estimated by inter-annotator agreement (unsupervised) and/or agreement with known expert labels (supervised). In this paper, we investigate the…
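
Below is a minimal sketch (not the authors' implementation) of the two consensus rules the abstract describes: simple majority vote and accuracy-weighted vote. The worker labels and per-worker accuracy estimates are hypothetical.

from collections import Counter

def majority_vote(labels):
    # Simple majority vote: every worker's label counts equally.
    return Counter(labels).most_common(1)[0][0]

def weighted_vote(labels, accuracies):
    # Weighted vote: each worker's label is weighted by an estimated
    # accuracy, e.g. from inter-annotator agreement (unsupervised) or
    # agreement with expert gold labels (supervised).
    scores = {}
    for label, acc in zip(labels, accuracies):
        scores[label] = scores.get(label, 0.0) + acc
    return max(scores, key=scores.get)

# Hypothetical example: three workers label the same item.
labels = ["relevant", "relevant", "non-relevant"]
accuracies = [0.50, 0.30, 0.90]  # assumed per-worker accuracy estimates

print(majority_vote(labels))              # relevant (2 votes vs. 1)
print(weighted_vote(labels, accuracies))  # non-relevant (0.90 > 0.80)

With equal weights the two low-accuracy workers outvote the reliable one; weighting by estimated accuracy flips the consensus.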
Citations

Semantic Scholar estimates that this publication has approximately 100 citations based on the available data. (Citations-per-year chart omitted.)

