Practice of Efficient Data Collection via Crowdsourcing: Aggregation, Incremental Relabelling, and Pricing

@inproceedings{Drutsa2020PracticeOE,
  title={Practice of Efficient Data Collection via Crowdsourcing: Aggregation, Incremental Relabelling, and Pricing},
  author={Alexey Drutsa and Valentina Fedorova and Dmitry Ustalov and Olga Megorskaya and Evfrosiniya Zerminova and Daria Baidakova},
  booktitle={Proceedings of the 13th International Conference on Web Search and Data Mining},
  year={2020}
}
In this tutorial, we share unique industry experience in efficient data labelling via crowdsourcing, presented by leading researchers and engineers from Yandex. We give an introduction to data labelling via public crowdsourcing marketplaces and present the key components of efficient label collection. This is followed by a practice session, where participants choose one of the real label collection tasks, experiment with selecting settings for the labelling…
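To make the aggregation component concrete before the practice session, here is a minimal majority-vote baseline over redundant crowd labels. This is an illustrative Python sketch, not the tutorial's production pipeline; the function and variable names are hypothetical.

from collections import Counter

def majority_vote(labels_per_task):
    """Aggregate redundant crowd labels by majority vote.

    labels_per_task: dict mapping task_id -> list of labels from different workers.
    Returns a dict mapping task_id -> (winning label, share of votes it received).
    """
    aggregated = {}
    for task_id, labels in labels_per_task.items():
        label, votes = Counter(labels).most_common(1)[0]
        aggregated[task_id] = (label, votes / len(labels))
    return aggregated

# Example: three workers label two tasks; ties are broken by insertion order.
print(majority_vote({"t1": ["cat", "cat", "dog"], "t2": ["dog", "dog", "dog"]}))
# {'t1': ('cat', 0.666...), 't2': ('dog', 1.0)}

Skill-weighted aggregation models, which the tutorial covers in more depth, refine this baseline; the sketches placed next to the references below follow the same illustrative spirit.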

Citations

Crowdsourcing Practice for Efficient Data Labeling: Aggregation, Incremental Relabeling, and Pricing
TLDR
This tutorial gives an introduction to data labeling via public crowdsourcing marketplaces and presents the key components of efficient label collection together with the major theoretical results in efficient aggregation, incremental relabeling, and dynamic pricing.
Aggregation Techniques in Crowdsourcing: Multiple Choice Questions and Beyond
TLDR
This tutorial aims to present common and recent label aggregation techniques for multiple-choice questions, multi-class labels, ratings, pairwise comparison, and image/text annotation.
CrowdSpeech and VoxDIY: Benchmark Datasets for Crowdsourced Audio Transcription
TLDR
A principled pipeline for constructing datasets of crowdsourced audio transcriptions in any novel domain is designed, and its applicability to an under-resourced language is shown by constructing VoxDIY, a counterpart of CrowdSpeech for the Russian language.
Prediction of Hourly Earnings and Completion Time on a Crowdsourcing Platform
TLDR
The solution to the problem of predicting user performance demonstrates an improvement in prediction quality of up to 25% for hourly earnings and up to 32% for completion time w.r.t. a naive baseline based solely on the historical performance of users on tasks.
Random Sampling-Arithmetic Mean: A Simple Method of Meteorological Data Quality Control Based on Random Observation Thought
TLDR
Experimental results show that the proposed random sampling-arithmetic mean (RS-AM) method can effectively solve the problem of data observation quality and, compared with the conflict resolution on heterogeneous data (CRH) method, reduces MSE by 1.5% and RMSE by 2.9% while keeping the error rate essentially the same.
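The RS-AM name abbreviates "random sampling-arithmetic mean": estimate a quantity by averaging a random subsample of its redundant observations. The sketch below only illustrates that general idea; the paper's exact sampling and quality-control procedure is not reproduced here, and the function name is hypothetical.

import random

def rs_am(observations, sample_size, seed=0):
    """Illustrative random sampling-arithmetic mean estimate.

    observations: redundant numeric readings of one quantity.
    sample_size: how many readings to draw without replacement.
    """
    rng = random.Random(seed)
    sample = rng.sample(observations, min(sample_size, len(observations)))
    return sum(sample) / len(sample)

# One noisy reading (35.0) may or may not enter the subsample.
print(rs_am([20.1, 19.8, 20.3, 35.0, 20.0], sample_size=3))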

References

Showing 1-10 of 44 references
Practice of Efficient Data Collection via Crowdsourcing at Large-Scale
TLDR
An introduction to data labeling via public crowdsourcing marketplaces and the key components of efficient label collection are presented, and rich industrial experience of applying these algorithms and constructing large-scale label collection pipelines is shared.
Approval Voting and Incentives in Crowdsourcing
TLDR
This article introduces approval voting to utilize the expertise of workers who have partial knowledge of the true answer, couples it with two strictly proper scoring rules, and establishes attractive optimality and uniqueness properties of the scoring rules.
Exploiting Commonality and Interaction Effects in Crowdsourcing Tasks Using Latent Factor Models
Crowdsourcing services such as Amazon Mechanical Turk [1] are increasingly being used to annotate large datasets for machine learning and data mining applications. The crowdsourced data labels…
Analysis of Minimax Error Rate for Crowdsourcing and Its Application to Worker Clustering Model
TLDR
A minimax error rate is derived under a more practical setting for a broader class of crowdsourcing models that includes the Dawid and Skene (DS) model as a special case, and a worker clustering model is proposed that is more practical than the DS model under real crowdsourcing settings.
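The Dawid and Skene (DS) model mentioned here is the classical confusion-matrix model for crowdsourced labels. A minimal EM fit for it looks roughly as follows; this is an illustrative sketch with hypothetical names that assumes every task has at least one label, and it is not the paper's minimax-error analysis or worker clustering model.

import numpy as np

def dawid_skene(task_worker_label, n_classes, n_workers, n_iter=20):
    """Minimal EM for the Dawid-Skene confusion-matrix model.

    task_worker_label: list of (task, worker, label) triples with integer ids.
    Returns (posterior over true labels per task, worker confusion matrices).
    """
    n_tasks = max(t for t, _, _ in task_worker_label) + 1
    # Initialise posteriors with per-task label frequencies (soft majority vote).
    post = np.zeros((n_tasks, n_classes))
    for t, _, l in task_worker_label:
        post[t, l] += 1.0
    post /= post.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # M-step: class priors and per-worker confusion matrices.
        prior = post.mean(axis=0) + 1e-6
        conf = np.full((n_workers, n_classes, n_classes), 1e-6)  # smoothing
        for t, w, l in task_worker_label:
            conf[w, :, l] += post[t]
        conf /= conf.sum(axis=2, keepdims=True)

        # E-step: recompute posteriors from priors and confusion matrices.
        log_post = np.tile(np.log(prior), (n_tasks, 1))
        for t, w, l in task_worker_label:
            log_post[t] += np.log(conf[w, :, l])
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
    return post, conf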
No Oops, You Won't Do It Again: Mechanisms for Self-correction in Crowdsourcing
TLDR
This work proposes a two-stage setting for crowdsourcing where the worker first answers the questions, and is then allowed to change her answers after looking at a (noisy) reference answer, and develops mechanisms to incentivize workers to act appropriately.
Quality-Based Pricing for Crowdsourced Workers
The emergence of online paid crowdsourcing platforms, such as Amazon Mechanical Turk (AMT), presents huge opportunities to distribute tasks to human workers around the world, on demand and at…
How Many Workers to Ask?: Adaptive Exploration for Collecting High Quality Labels
TLDR
This paper conducts a data analysis on an industrial crowdsourcing platform, and uses the observations from this analysis to design new stopping rules that use the workers' quality scores in a non-trivial manner.
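In the same spirit, a toy quality-aware stopping rule can decide when enough labels have been collected for a task. This is a simplified sketch, not one of the rules derived in the paper: it assumes binary labels, symmetric worker accuracy, a uniform prior, and uses hypothetical names.

import math

def should_stop(votes, worker_quality, confidence=0.99):
    """Stop collecting labels once the leading answer is confident enough.

    votes: list of (worker_id, binary_label) pairs collected so far.
    worker_quality: dict worker_id -> estimated probability of a correct answer.
    Returns (stop, leading_label).
    """
    log_odds = 0.0  # log P(label=1 | votes) - log P(label=0 | votes)
    for worker_id, label in votes:
        q = min(max(worker_quality.get(worker_id, 0.5), 1e-6), 1 - 1e-6)
        step = math.log(q / (1 - q))
        log_odds += step if label == 1 else -step
    leading = 1 if log_odds >= 0 else 0
    p_leading = 1.0 / (1.0 + math.exp(-abs(log_odds)))
    return p_leading >= confidence, leading

# Two accurate workers agree, so no third label is requested.
print(should_stop([(1, 1), (2, 1)], {1: 0.95, 2: 0.9}, confidence=0.97))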
Pairwise ranking aggregation in a crowdsourced setting
TLDR
This work proposes a new model to predict a gold-standard ranking that hinges on combining pairwise comparisons via crowdsourcing and formalizes this as an active learning strategy that incorporates an exploration-exploitation tradeoff and implements it using an efficient online Bayesian updating scheme.
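For pairwise comparisons, a plain Bradley-Terry fit is the standard non-Bayesian baseline for turning crowd judgements into a ranking. The sketch below shows only that baseline; the cited work instead uses a richer Bayesian model with an active exploration-exploitation strategy, and the names here are hypothetical.

import numpy as np

def bradley_terry(comparisons, n_items, n_iter=100):
    """Fit Bradley-Terry scores from (winner, loser) pairs via MM updates."""
    wins = np.zeros(n_items)
    n_pair = np.zeros((n_items, n_items))
    for w, l in comparisons:
        wins[w] += 1
        n_pair[w, l] += 1
        n_pair[l, w] += 1

    p = np.ones(n_items)
    for _ in range(n_iter):
        denom = (n_pair / (p[:, None] + p[None, :])).sum(axis=1)
        p = (wins + 1e-9) / (denom + 1e-9)
        p /= p.sum()
    return np.argsort(-p), p  # ranking (best first) and normalised scores

ranking, scores = bradley_terry([(0, 1), (0, 2), (1, 2), (0, 1)], n_items=3)
print(ranking, scores)  # item 0 ranks first in this toy example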
Community-based bayesian aggregation models for crowdsourcing
TLDR
A novel community-based Bayesian label aggregation model, CommunityBCC, assumes that crowd workers conform to a few different types, where each type represents a group of workers with similar confusion matrices; it consistently outperforms state-of-the-art label aggregation methods.
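A crude stand-in for the community idea is to cluster workers by the similarity of their estimated confusion matrices. CommunityBCC itself infers the communities jointly within a Bayesian model; the sketch below merely assumes the matrices were already estimated (e.g. by a Dawid-Skene fit as above), relies on scikit-learn, and uses hypothetical names.

import numpy as np
from sklearn.cluster import KMeans

def cluster_workers_by_confusion(confusions, n_communities=3, seed=0):
    """Group workers whose confusion matrices look alike.

    confusions: array of shape (n_workers, n_classes, n_classes).
    Returns an array of community ids, one per worker.
    """
    flat = confusions.reshape(confusions.shape[0], -1)
    km = KMeans(n_clusters=n_communities, random_state=seed, n_init=10)
    return km.fit_predict(flat)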
Regularized Minimax Conditional Entropy for Crowdsourcing
TLDR
This paper proposes a minimax conditional entropy principle to infer ground truth from noisy crowdsourced labels, and derives a unique probabilistic labeling model jointly parameterized by worker ability and item difficulty.