CrowdScape: interactively visualizing user behavior and output

@inproceedings{Rzeszotarski2012CrowdScapeIV,
  title={CrowdScape: interactively visualizing user behavior and output},
  author={Jeffrey M. Rzeszotarski and Aniket Kittur},
  booktitle={Proceedings of the 25th annual ACM symposium on User interface software and technology},
  year={2012}
}
  • Jeffrey M. Rzeszotarski, Aniket Kittur
  • Published 7 October 2012
  • Computer Science
  • Proceedings of the 25th annual ACM symposium on User interface software and technology
Crowdsourcing has become a powerful paradigm for accomplishing work quickly and at scale, but involves significant challenges in quality control. Researchers have developed algorithmic quality control approaches based on either worker outputs (such as gold standards or worker agreement) or worker behavior (such as task fingerprinting), but each approach has serious limitations, especially for complex or creative work. Human evaluation addresses these limitations but does not scale well with… 
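
The two algorithmic families contrasted above are easy to sketch: output-based checks score workers against seeded gold questions or against each other's answers, while behavior-based checks look at how the work was done. Below is a minimal Python illustration of the output-based side; the data, labels, and item names are invented for the example, not taken from the paper.

    from collections import Counter

    # Hypothetical worker submissions: worker id -> {item id: label}.
    labels = {
        "w1": {"q1": "cat", "q2": "dog", "q3": "cat"},
        "w2": {"q1": "cat", "q2": "dog", "q3": "dog"},
        "w3": {"q1": "dog", "q2": "dog", "q3": "cat"},
    }
    gold = {"q1": "cat"}  # items with known answers seeded into the task

    def gold_accuracy(worker):
        # Output-based check 1: score a worker on the seeded gold items.
        answered = [q for q in gold if q in labels[worker]]
        if not answered:
            return None
        return sum(labels[worker][q] == gold[q] for q in answered) / len(answered)

    def agreement(worker):
        # Output-based check 2: how often the worker matches the majority vote.
        hits = total = 0
        for q, ans in labels[worker].items():
            votes = Counter(lab[q] for lab in labels.values() if q in lab)
            hits += ans == votes.most_common(1)[0][0]
            total += 1
        return hits / total

    for w in labels:
        print(w, gold_accuracy(w), round(agreement(w), 2))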

Citations

Quality Control in Crowdsourcing
  TLDR: This survey derives a quality model for crowdsourcing tasks, identifies the methods and techniques that can be used to assess the attributes of the model, and the actions and strategies that help prevent and mitigate quality problems.
Sprout: Crowd-Powered Task Design for Crowdsourcing
  TLDR: A novel meta-workflow is proposed that helps requesters optimize crowdsourcing task designs; Sprout, the open-source tool that implements this workflow, improves task designs by eliciting points of confusion from crowd workers, enabling requesters to quickly understand these misconceptions and the overall space of questions.
Quality Management in Crowdsourcing using Gold Judges Behavior
  TLDR: This paper compares the behaviors of trained professional judges and crowd workers, uses the trained judges' behavior signals as gold behavior to train a classifier to detect poorly performing crowd workers, and shows that classification accuracy almost doubles in some tasks with the use of gold behavior data.
Crowdsourced Data Management: Industry and Academic Perspectives
  TLDR: Crowdsourced Data Management: Industry and Academic Perspectives simultaneously introduces academics to real problems that practitioners encounter every day, and provides a survey of the state of the art in crowd-powered algorithms and system design tailored to large-scale data processing.
Crowd Worker Strategies in Relevance Judgment Tasks
  TLDR: This paper delves into the techniques and tools that highly experienced crowd workers use to be more efficient in completing crowdsourcing micro-tasks, and highlights frequently used shortcut patterns that can speed up task completion, thus increasing the hourly wage of efficient workers.
Influencing and Measuring Behaviour in Crowdsourced Activities
  TLDR: The goal of this chapter is to clearly enumerate the difficulties of crowdsourcing psychometric data and to explore how, with careful planning and execution, these limitations can be overcome.
Information Visualization Evaluation Using Crowdsourcing
  TLDR: A review of the use of crowdsourcing for evaluation in visualization research, which analyzed 190 crowdsourcing experiments reported in 82 papers that were published in major visualization conferences and journals between 2006 and 2017.
It's getting crowded!: improving the effectiveness of microtask crowdsourcing
  TLDR: A two-level categorization scheme for microtasks and a novel model for task clarity based on the goal and role clarity constructs are proposed, along with insights into the task affinity of workers, the effort exerted to complete tasks of various types, and worker satisfaction with the monetary incentives.
Detecting low-quality crowdtesting workers
  TLDR: This paper proposes finer-grained cursor trajectory analysis, including submovement analysis, to identify low-quality workers, and shows that the success rate in detecting low-quality workers is around 80%.
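
This cursor-trajectory idea admits a small sketch. Splitting a trace at near-stationary pauses is a generic heuristic standing in for the paper's finer-grained submovement analysis; the speed threshold and the synthetic trace below are assumptions for illustration.

    import numpy as np

    def count_submovements(xs, ys, ts, pause_speed=1.0):
        # Count submovements by treating contiguous runs of near-zero speed
        # as pauses between ballistic submovements. `pause_speed` is an
        # assumed threshold, not a value from the paper.
        xs, ys, ts = map(np.asarray, (xs, ys, ts))
        dt = np.clip(np.diff(ts), 1e-9, None)
        speed = np.hypot(np.diff(xs), np.diff(ys)) / dt
        moving = speed >= pause_speed
        # A new submovement starts at each False -> True transition.
        starts = np.sum(moving[1:] & ~moving[:-1]) + int(moving[0])
        return int(starts)

    # Synthetic trace: two movements separated by a near-stationary pause.
    t = np.linspace(0.0, 2.0, 50)
    x = np.concatenate([np.linspace(0, 100, 25), np.full(5, 100.0),
                        np.linspace(100, 180, 20)])
    y = np.zeros(50)
    print(count_submovements(x, y, t))  # -> 2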

References

Showing 1-10 of 20 references
CrowdWeaver: visually managing complex crowd work
  TLDR: CrowdWeaver is presented, a system to visually manage complex crowd work that supports the creation and reuse of crowdsourcing and computational tasks in integrated task flows, manages the flow of data between tasks, and allows tracking and notification of task progress, with support for real-time modification.
CrowdForge: crowdsourcing complex work
  TLDR: This work presents a general-purpose framework for accomplishing complex and interdependent tasks using micro-task markets, a web-based prototype, and case studies on article writing, decision making, and science journalism that demonstrate the benefits and limitations of the approach.
Instrumenting the crowd: using implicit behavioral measures to predict task performance
  TLDR: The technique captures behavioral traces from online crowd workers and uses them to predict outcome measures such as quality, errors, and the likelihood of cheating; results indicate that these models generalize to related tasks.
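
As a rough sketch of this instrumentation idea: low-level event logs are aggregated into a per-task feature vector, a "fingerprint", which can then feed any standard classifier. The event types and features below are invented for the example; the paper's actual feature set is richer.

    from collections import Counter

    def fingerprint(events):
        # events: time-ordered list of (timestamp_seconds, event_type) tuples
        # logged while one worker completed one task.
        kinds = Counter(kind for _, kind in events)
        times = [t for t, _ in events]
        gaps = [b - a for a, b in zip(times, times[1:])]
        return {
            "duration": (max(times) - min(times)) if times else 0.0,
            "keypresses": kinds["key"],
            "scrolls": kinds["scroll"],
            "clicks": kinds["click"],
            "longest_pause": max(gaps, default=0.0),
        }

    trace = [(0.0, "click"), (1.2, "scroll"), (9.8, "key"),
             (10.1, "key"), (10.4, "click")]
    print(fingerprint(trace))
    # These per-task vectors would then feed a standard classifier
    # (e.g. logistic regression) to predict quality or cheating.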
Shepherding the crowd yields better work
  TLDR: This paper investigates whether timely, task-specific feedback helps crowd workers learn, persevere, and produce better results in micro-task platforms, discussing interaction and infrastructure approaches for integrating real-time assessment into online work.
Crowdsourcing user studies with Mechanical Turk
  TLDR: Although micro-task markets have great potential for rapidly collecting user measurements at low costs, it is found that special care is needed in formulating tasks in order to harness the capabilities of the approach.
Soylent: a word processor with a crowd inside
  TLDR: This paper presents Soylent, a word processing interface that enables writers to call on Mechanical Turk workers to shorten, proofread, and otherwise edit parts of their documents on demand, and the Find-Fix-Verify crowd programming pattern, which splits tasks into a series of generation and review stages.
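
The Find-Fix-Verify pattern is concrete enough to sketch. In the toy version below, ask_crowd is a hypothetical hook standing in for posting microtasks to a platform and collecting answers; the three-stage structure, not the API, is the point.

    from collections import Counter

    def find_fix_verify(paragraph, ask_crowd, n=5):
        # Find: independent workers each flag one problem span; keeping only
        # spans flagged by at least two workers filters lazy or eager outliers.
        flagged = ask_crowd(f"Mark one problematic phrase in: {paragraph}", n)
        spans = [s for s, c in Counter(flagged).items() if c >= 2]

        patches = []
        for span in spans:
            # Fix: a fresh set of workers proposes rewrites for each span.
            rewrites = ask_crowd(f"Rewrite this phrase: {span}", n)
            # Verify: a third set votes; the plurality rewrite wins.
            votes = ask_crowd(f"Pick the best of {sorted(set(rewrites))}", n)
            patches.append((span, Counter(votes).most_common(1)[0][0]))
        return patches

    # Toy run with a scripted "crowd" answering from canned pools.
    canned = {
        "Mark": ["kind of bad", "kind of bad", "other bit",
                 "kind of bad", "other bit"],
        "Rewrite": ["rather poor"] * 3 + ["suboptimal"] * 2,
        "Pick": ["rather poor"] * 4 + ["suboptimal"],
    }
    fake = lambda prompt, n: canned[prompt.split()[0]][:n]
    print(find_fix_verify("This sentence is kind of bad.", fake))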
Collaboratively crowdsourcing workflows with Turkomatic
  TLDR: It is argued that Turkomatic's collaborative approach can be more successful than the conventional workflow design process, and implications for the design of collaborative crowd planning systems are discussed.
Vox Populi: Collecting High-Quality Labels from a Crowd
  TLDR: This paper studies the problem of pruning low-quality teachers in a crowd, in order to improve the label quality of the training set, and shows that this is in fact achievable with a simple and efficient algorithm, which does not require that each example be repeatedly labeled by multiple teachers.
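
A generic agreement-based variant of this pruning idea fits in a few lines. Note this is not Vox Populi's exact algorithm (which notably avoids repeated labeling of each example); it is the simplest version of the concept: repeatedly drop the worker who agrees least with the current majority.

    from collections import Counter

    def majority(workers, item):
        votes = Counter(lab[item] for lab in workers.values() if item in lab)
        return votes.most_common(1)[0][0]

    def prune(workers, min_agreement=0.6):
        # Drop the least-agreeing worker until everyone left clears the bar.
        workers = dict(workers)
        while len(workers) > 1:
            score = {
                w: sum(lab[i] == majority(workers, i) for i in lab) / len(lab)
                for w, lab in workers.items()
            }
            worst = min(score, key=score.get)
            if score[worst] >= min_agreement:
                break
            del workers[worst]
        return workers

    labels = {
        "good1": {"a": 1, "b": 0, "c": 1},
        "good2": {"a": 1, "b": 0, "c": 1},
        "spam":  {"a": 0, "b": 1, "c": 0},
    }
    print(prune(labels).keys())  # the spammer is dropped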
Quality management on Amazon Mechanical Turk
  TLDR: This work presents algorithms that improve the existing state-of-the-art techniques, enabling the separation of bias and error, and illustrates how to incorporate cost-sensitive classification errors in the overall framework and how to seamlessly integrate unsupervised and supervised techniques for inferring the quality of the workers.
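
The separation of bias from error can be illustrated with per-worker confusion matrices estimated on gold items: a systematically biased worker is predictable, hence correctable, while a uniformly noisy one carries no signal. A toy sketch with invented data; the paper's algorithms are more general than this.

    import numpy as np

    def confusion(worker_labels, gold, classes=("pos", "neg")):
        # Estimate a worker's confusion matrix P(assigned | true) from gold
        # items. Row i: true class i; column j: P(worker says class j).
        idx = {c: k for k, c in enumerate(classes)}
        m = np.zeros((len(classes), len(classes)))
        for item, true in gold.items():
            if item in worker_labels:
                m[idx[true], idx[worker_labels[item]]] += 1
        return m / np.clip(m.sum(axis=1, keepdims=True), 1, None)

    gold = {f"q{i}": ("pos" if i % 2 else "neg") for i in range(10)}
    # A *biased* worker flips every label; a *noisy* one answers at random.
    flipper = {q: ("neg" if t == "pos" else "pos") for q, t in gold.items()}
    print(confusion(flipper, gold))
    # [[0. 1.]
    #  [1. 0.]]  -> fully predictable, hence correctable: invert the mapping.
    # A uniform-noise worker instead yields rows near [0.5, 0.5]: no signal.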
Generating Creative Ideas Through Crowds: An Experimental Study of Combination
  TLDR: A new way of organizing the crowd to produce new ideas is discussed: an idea generation system using combination, in which participants synthesize new designs from the efforts of their peers.