Corpus ID: 211532238

Modeling Uncertainty and Imprecision of Crowdsourcing Data: MONITOR

@inproceedings{Thierry2020ModelisationDL,
  title={Mod{\'e}lisation de l'incertitude et de l'impr{\'e}cision de donn{\'e}es de crowdsourcing : MONITOR},
  author={Constance Thierry and Jean-Christophe Dubois and Yolande Le Gall and Arnaud Martin},
  booktitle={EGC},
  year={2020}
}
Crowdsourcing is defined as the outsourcing of tasks to a crowd of contributors. The crowd on these platforms is very diverse and includes malicious contributors who are attracted by the remuneration and do not perform the tasks conscientiously. It is essential to identify these contributors so that their responses can be discarded. As not all contributors have the same aptitude for a task, it seems appropriate to weight their answers according to their qualifications. This paper…
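The belief-function machinery that the abstract alludes to can be illustrated with Dempster's rule of combination, which fuses the answers of several contributors while tracking conflict between them. This is a minimal sketch, not the paper's actual method; the function name and the tiny yes/no example are hypothetical:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset focal
    elements to masses) with Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # product mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    # normalise by the non-conflicting mass
    return {f: v / (1.0 - conflict) for f, v in combined.items()}

# Two contributors answer a yes/no question; the second is less
# qualified, so part of their mass stays on the whole frame (ignorance).
yes, no = frozenset({"yes"}), frozenset({"no"})
frame = yes | no
m_expert = {yes: 0.8, frame: 0.2}
m_novice = {yes: 0.3, no: 0.3, frame: 0.4}
fused = dempster_combine(m_expert, m_novice)  # most mass ends up on "yes"
```

Keeping some mass on the whole frame is how the formalism expresses an imprecise ("I am not sure") answer, which is exactly the kind of information a simple majority vote discards.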


References

Characterization of Experts in Crowdsourcing Platforms
TLDR
This work addresses the problem of identifying experts among participants, that is, workers who tend to answer the questions correctly, and derives a measure that characterizes the expertise level of each participant based on precision and exactitude degrees that represent two parts of the expertise level.
Handling query answering in crowdsourcing systems: A belief function-based approach
TLDR
This paper proposes a belief-function-based approach to achieve quality control of workers' responses in crowdsourcing and conducts comprehensive experiments to validate the effectiveness of the proposal.
Measuring the Expertise of Workers for Crowdsourcing Applications
TLDR
This work proposes an innovative measure of expertise based on four factors defined within the theory of belief functions, validates it on a dataset with an objective comparison of the items concerned, and compares it to the Fagin distance on a real experiment.
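Once an expertise level has been estimated, a standard way to weight a contributor's answer in the belief-function setting is Shafer's discounting: masses are scaled by a reliability factor and the remainder is transferred to the whole frame (total ignorance). This is a generic sketch under the assumption that expertise is summarized by a single factor `alpha`, not the four-factor measure proposed above:

```python
def discount(m, alpha, frame):
    """Shafer discounting: scale each mass by reliability alpha in
    [0, 1] and move the leftover mass onto the whole frame."""
    out = {f: alpha * v for f, v in m.items() if f != frame}
    out[frame] = 1.0 - alpha + alpha * m.get(frame, 0.0)
    return out

yes, no = frozenset({"yes"}), frozenset({"no"})
frame = yes | no
m = {yes: 0.9, frame: 0.1}
weak = discount(m, 0.5, frame)  # a doubtful worker's confident answer
```

With `alpha = 0`, the answer collapses to total ignorance and no longer influences the combination; with `alpha = 1`, it is taken at face value.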
The face of quality in crowdsourcing relevance labels: demographics, personality and labeling accuracy
Information retrieval systems require human-contributed relevance labels for their training and evaluation. Increasingly such labels are collected under the anonymous, uncontrolled conditions of…
A worker clustering-based approach of label aggregation under the belief function theory
TLDR
This paper proposes a new label aggregation technique that determines workers' qualities via a clustering process, then represents and combines their labels under the belief function theory to estimate the final one.
Annotation models for crowdsourced ordinal data
TLDR
This work proposes annotator models based on Receiver Operating Characteristic (ROC) curve analysis to consolidate the ordinal annotations from multiple annotators and indicates that the proposed algorithm is superior to the commonly used majority voting rule.
On the Dempster-Shafer framework and new combination rules
Modeling Uncertainty and Inaccuracy on Data from Crowdsourcing Platforms: MONITOR
TLDR
A new method for characterizing the profile of contributors and aggregating answers using the theory of belief functions to estimate uncertain and imprecise answers is proposed.
Absolute pitch.
TLDR
Although the etiology of AP is not yet completely understood, evidence points toward the early-learning theory, which states that AP can be learned by anyone during a limited period early in development, up to about age 6, after which a general developmental shift from perceiving individual features to perceiving relations among features makes AP difficult or impossible to acquire.