Optimising crowdsourcing efficiency: Amplifying human computation with validation

@article{Chamberlain2018OptimisingCE,
  title={Optimising crowdsourcing efficiency: Amplifying human computation with validation},
  author={Jon Chamberlain and Udo Kruschwitz and Massimo Poesio},
  journal={it - Information Technology},
  year={2018},
  volume={60},
  pages={41--49}
}
Abstract: Crowdsourcing has revolutionised the way tasks can be completed, but the process is frequently inefficient, costing practitioners time and money. This research investigates whether crowdsourcing can be optimised with a validation process, as measured by four criteria: quality, cost, noise, and speed. A validation model is described, simulated, and tested on real data from an online crowdsourcing game that collects data about human language. Results show that by adding an agreement…

Speaking Outside the Box: Exploring the Benefits of Unconstrained Input in Crowdsourcing and Citizen Science Platforms

This paper explores how crowdsourcing and citizen science systems collect data and complete tasks, illustrated by a case study from the online language game-with-a-purpose Phrase Detectives.

A Review of Mobile Crowdsourcing Architectures and Challenges: Toward Crowd-Empowered Internet-of-Things

An extensive survey of the literature on mobile crowdsourcing research is provided, highlighting implementation needs, architectures, and key considerations for development, and a taxonomy is presented based on the key issues in mobile crowdsourcing.

Designing for Collective Intelligence and Community Resilience on Social Networks

Case studies are explored of groups of users gathering around a central theme and working together to solve problems, complete tasks, and develop social connections, and a framework for lightweight engagement using existing platforms and social networks is proposed.

Cipher: A Prototype Game-with-a-Purpose for Detecting Errors in Text

A prototype computer game called Cipher was developed that encourages people to identify errors in text by using steganography as the entertaining game element: players play for entertainment while making valuable annotations that locate text errors.

References

Showing 1–10 of 22 references

Analyzing costs and accuracy of validation mechanisms for crowdsourcing platforms

Turkomatic: automatic recursive task and workflow design for mechanical turk

This work presents a new recursive method for automating task and workflow design for high-level, complex tasks, recruiting workers from the crowd to help plan how problems can be solved most effectively.

Groupsourcing: Distributed Problem Solving Using Social Networks

A method for archiving social network messages is presented, and messages containing an image classification task in the domain of marine biology are investigated.

Soylent: a word processor with a crowd inside

Soylent, a word processing interface that enables writers to call on Mechanical Turk workers to shorten, proofread, and otherwise edit parts of their documents on demand, is presented, along with the Find-Fix-Verify crowd programming pattern, which splits tasks into a series of generation and review stages.

Quality Control of Crowd Labeling through Expert Evaluation

It is argued that injecting a little expertise into the labeling process will significantly improve the accuracy of the labeling task and give better quality labels than majority voting and other state-of-the-art methods.

TurKit: human computation algorithms on mechanical turk

This work presents the crash-and-rerun programming model that makes TurKit possible, along with a variety of applications for human computation algorithms and case studies of TurKit used for real experiments across different fields.

Cheap and Fast – But is it Good? Evaluating Non-Expert Annotations for Natural Language Tasks

This work explores the use of Amazon's Mechanical Turk system, a significantly cheaper and faster method for collecting annotations from a broad base of paid non-expert contributors over the Web, and proposes a technique for bias correction that significantly improves annotation quality on two tasks.

Building reputation in StackOverflow: An empirical investigation

The results indicate that the following activities can help to build reputation quickly: answering questions related to tags with lower expertise density, answering questions promptly, being the first one to answer a question, being active during off peak hours, and contributing to diverse areas.

Games with a Purpose (GWAPs)

Human brains can be seen as knowledge processors in a distributed system: each can carry out, consciously or not, a small part of a computation too large to be done by any one alone. These are also…

Games with a purpose for social networking platforms

This paper presents an application framework to develop interactive games with a purpose on top of social networking platforms, suitable for deployment in both mobile and Web-based environments.