• Corpus ID: 59155870

Who are the Turkers? Worker Demographics in Amazon Mechanical Turk

@inproceedings{Ross2009WhoAT,
  title={Who are the Turkers? Worker Demographics in Amazon Mechanical Turk},
  author={Joel Ross and Andrew Zaldivar and Lilly C. Irani and Bill Tomlinson},
  year={2009}
}
Amazon Mechanical Turk (MTurk) is a crowdsourcing system in which tasks are distributed to a population of thousands of anonymous workers for completion. This system is becoming increasingly popular with researchers and developers. In this paper, we survey MTurk workers about their demographic make-up and usage behavior. We find that this population is diverse across several notable demographic dimensions such as age, gender, and income, but is not precisely representative of the U.S. as a… 
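
For context on how such a survey reaches workers: requesters publish Human Intelligence Tasks (HITs) through the MTurk Requester API, and workers who accept them receive a small reward per completed assignment. The sketch below is a hypothetical illustration of posting a demographic-survey HIT with Python's boto3 client, not the authors' actual setup; the sandbox endpoint, survey URL, reward, and assignment counts are placeholder assumptions.

  # Hypothetical sketch of posting a survey HIT via the MTurk Requester API
  # using boto3; endpoint, URL, reward, and counts are placeholder values.
  import boto3

  mturk = boto3.client(
      "mturk",
      region_name="us-east-1",
      # Sandbox endpoint so test HITs cost nothing; remove it to go live.
      endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
  )

  # An ExternalQuestion embeds a survey hosted at an external (placeholder) URL.
  question_xml = """\
  <ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
    <ExternalURL>https://example.org/worker-demographics-survey</ExternalURL>
    <FrameHeight>600</FrameHeight>
  </ExternalQuestion>"""

  hit = mturk.create_hit(
      Title="Short demographic survey",
      Description="A few questions about your age, gender, income, and MTurk usage.",
      Keywords="survey, demographics, opinion",
      Reward="0.10",                    # USD, passed as a string
      MaxAssignments=100,               # number of distinct workers to recruit
      LifetimeInSeconds=7 * 24 * 3600,  # how long the HIT stays listed
      AssignmentDurationInSeconds=600,  # time each worker has to complete it
      Question=question_xml,
  )
  print("Created HIT:", hit["HIT"]["HITId"])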

Citations

Anatomy of a Crowdsourcing Platform - Using the Example of Microworkers.com
TLDR
An inside view of the usage data from Microworkers is given and it is shown that there are significant differences from the well-studied MTurk.
Mechanical Turk and Financial Dependency on Crowdsourcing
TLDR
This paper investigates whether workers who are financially dependent on income from Mechanical Turk produce work of different quality than workers who are not financially dependent on Mechanical Turk.
Crowdsourcing for Language Resource Development: Critical Analysis of Amazon Mechanical Turk Overpowering Use
TLDR
This article is a position paper about crowdsourced microworking systems and especially Amazon Mechanical Turk, the use of which has been steadily growing in language processing in the past few years, and proposes practical and organizational solutions to improve the development of new language resources.
Series Human Cloud as Emerging Internet Application-Anatomy of the Microworkers Crowdsourcing Platform
TLDR
An inside view of Microworkers is given and it is shown that there are significant differences from the well-studied MTurk.
Do Mechanical Turks dream of square pie charts?
TLDR
Amazon's Mechanical Turk, a web service that facilitates the assignment of small, web-based tasks to a large pool of anonymous workers, is used to conduct perception and cognition studies, one of which was identical to a previous study performed in the lab.
Crowdsourcing for Language Resource Development: Criticisms About Amazon Mechanical Turk Overpowering Use
TLDR
This article is a position paper about Amazon Mechanical Turk, the use of which has been steadily growing in language processing in the past few years, and proposes practical and organizational solutions to improve the development of language resources.
Not All HITs Are Created Equal: Controlling for Reasoning and Learning Processes in MTurk
Challenges of crowdsourcing human-computer interaction (HCI) experiments on Amazon’s Mechanical Turk include risks posed by the combination of low monetary rewards and worker anonymity. These include
Crowd Coach
TLDR
Crowd Coach, a system that enables workers to receive peer coaching while on the job, is presented and it is found that Crowd Coach enhances workers' speed without sacrificing their work quality, especially in audio transcription tasks.
WiseMarket: a new paradigm for managing wisdom of online social users
TLDR
This paper presents WiseMarket, an effective framework for crowdsourcing on social media that motivates users to participate in a task with care and correctly aggregates their opinions on pairwise choice problems, and proposes exact algorithms for calculating the market confidence and the expected cost in a WiseMarket with n investors.
...
...

References

SHOWING 1-10 OF 12 REFERENCES
Crowdsourcing user studies with Mechanical Turk
TLDR
Although micro-task markets have great potential for rapidly collecting user measurements at low costs, it is found that special care is needed in formulating tasks in order to harness the capabilities of the approach.
Moving the crowd at iStockphoto: The composition of the crowd and motivations for participation in a crowdsourcing application
TLDR
Results indicate that the desire to make money, to develop individual skills, and to have fun were the strongest motivators for participation at iStockphoto, and that the crowd at iStockphoto is quite homogeneous and elite.
Cheap and Fast – But is it Good? Evaluating Non-Expert Annotations for Natural Language Tasks
TLDR
This work explores the use of Amazon's Mechanical Turk system, a significantly cheaper and faster method for collecting annotations from a broad base of paid non-expert contributors over the Web, and proposes a technique for bias correction that significantly improves annotation quality on two tasks.
Towards a model of understanding social search
TLDR
This work has integrated models from previous work in sensemaking and information seeking behavior to present a canonical social model of user activities before, during, and after search, suggesting where in the search process both explicitly and implicitly shared information may be valuable to individual searchers.
Learning to Trust the Crowd: Some Lessons from Wikipedia
  • F. Olleros
    2008 International MCETECH Conference on e-Technologies (MCETECH 2008)
  • 2008
TLDR
Concerns about Wikipedia's quality and sustainable success have to be tempered by the fact that Wikipedia is in the process of redefining the pertinent dimensions of quality and value for general encyclopedias.
Labeling images with a computer game
TLDR
A new interactive system is presented: a game that is fun and can be used to create valuable output; it addresses the image-labeling problem and encourages people to do the work by taking advantage of their desire to be entertained.
Utility data annotation with Amazon Mechanical Turk
  • A. Sorokin, D. Forsyth
    2008 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
  • 2008
TLDR
This work shows how to outsource data annotation to Amazon Mechanical Turk, and describes results for several different annotation problems, including some strategies for determining when the task is well specified and properly priced.
Crowdsourcing for relevance evaluation
TLDR
A new approach to evaluation called TERC is described, based on the crowdsourcing paradigm, in which many online users, drawn from a large community, each perform a small evaluation task.
AI Gets a Brain
In the 50 years since John McCarthy coined the term artificial intelligence, much progress has been made toward identifying, understanding, and automating many classes of symbolic and computational
Mechanical Turk: The Demographics
  • A Computer Scientist in a Business School (blog)
  • 2008
...
...