Breaking Monotony with Meaning: Motivation in Crowdsourcing Markets
Dana Chandler and Adam Kapelner
Differentiating Types of Meaningfulness as Motivation for Crowdsourcing Participation and Performance
With the advent of task-performance platforms like Amazon's Mechanical Turk (AMT), crowdsourcing has become a powerful means to address a variety of high-volume pragmatic problems.
Curiosity Killed the Cat, but Makes Crowdwork Better
The potential for curiosity as a new type of intrinsic motivational driver to incentivize crowd workers is examined; the authors design crowdsourcing task interfaces that explicitly incorporate mechanisms to induce curiosity and conduct a set of experiments on Amazon's Mechanical Turk.
Crowdsourcing performance evaluations of user interfaces
MTurk may be a productive setting for conducting performance evaluations of user interfaces, providing a complementary approach to existing methodologies; three previously well-studied user interface designs are evaluated.
Labor Allocation in Paid Crowdsourcing: Experimental Evidence on Positioning, Nudges and Prices
The evidence suggests that user interface and cognitive biases play an important role in online labor markets and that salience can be used by employers as a kind of "incentive multiplier".
Why Individuals Participate in Micro-task Crowdsourcing Work Environment: Revealing Crowdworkers' Perceptions
This study captures crowdworkers' perceptions to explore the characteristics of the crowd workers, crowdsourcing jobs, and the crowdwork environment that collectively drive crowdworkers to participate in crowd work.
Context Disclosure as a Source of Player Motivation in Human Computation Games (2019)
A study carried out with Amazon's Mechanical Turk (AMT) workers using the MATCHMAKERS HCG provides insights into how context can be used to better motivate HCG players.
Personalized and Diverse Task Composition in Crowdsourcing
It is shown that while task throughput and worker retention are best with ranked lists, crowdwork quality reaches its best with CTs diversified by requesters, thereby confirming that workers look to expose their “good” work to many requesters.
The Face of Quality in Crowdsourcing Relevance Labels: Demographics, Personality and Labeling Accuracy
Information retrieval systems require human-contributed relevance labels for their training and evaluation. Increasingly, such labels are collected under anonymous, uncontrolled conditions.
A Glimpse Far into the Future: Understanding Long-term Crowd Worker Accuracy
It is found that, contrary to prior claims, workers are extremely stable in their accuracy over the entire period, and it is demonstrated that workers' long-term accuracy can be predicted using just a glimpse of their performance on the first five tasks.


The labor economics of paid crowdsourcing
A model of workers supplying labor to paid crowdsourcing projects is presented and a novel method for estimating a worker's reservation wage - the key parameter in the labor supply model - is introduced.
The online laboratory: conducting experiments in a real labor market
The authors present their views on the potential role that online experiments can play within the social sciences, and recommend software development priorities and best practices.
What Do Laboratory Experiments Measuring Social Preferences Reveal About the Real World
A critical question facing experimental economists is whether behavior inside the laboratory is a good indicator of behavior outside the laboratory. To address that question, we build a model.
Field Experiments
Experimental economists are leaving the reservation. They are recruiting subjects in the field rather than in the classroom, using field goods rather than induced valuations, and using field context.
Man's search for meaning: The case of Legos
A validation of Amazon Mechanical Turk for the collection of acceptability judgments in linguistic theory
A quantitative comparison of two identical acceptability judgment experiments, each with 176 participants, suggests that aside from slightly higher participant rejection rates, AMT data are almost indistinguishable from laboratory data.
Running Experiments on Amazon Mechanical Turk
Although Mechanical Turk has recently become popular among social scientists as a source of experimental data, doubts may linger about the quality of data provided by the subjects recruited.
Demographics of Mechanical Turk
We present the results of a survey that collected information about the demographics of participants on Amazon Mechanical Turk, together with information about their level of activity and motivation.
The weirdest people in the world?
Judgment and decision making
B. Fischhoff, Wiley Interdisciplinary Reviews: Cognitive Science, 2010
The study of judgment and decision making entails three interrelated forms of research: (1) normative analysis, identifying the best courses of action given decision makers' values; and (2) descriptive research.