Soylent: a word processor with a crowd inside

@article{Bernstein2010SoylentAW,
  title={Soylent: a word processor with a crowd inside},
  author={Michael S. Bernstein and Greg Little and Rob Miller and Bj{\"o}rn Hartmann and Mark S. Ackerman and David R Karger and David Crowell and Katrina Panovich},
  journal={Proceedings of the 23rd annual ACM symposium on User interface software and technology},
  year={2010}
}
This paper introduces architectural and interaction patterns for integrating crowdsourced human contributions directly into user interfaces. It presents Soylent, a word processing interface that enables writers to call on Mechanical Turk workers to shorten, proofread, and otherwise edit parts of their documents on demand. To improve worker quality, the authors introduce the Find-Fix-Verify crowd programming pattern, which splits tasks into a series of generation and review stages. Evaluation studies demonstrate the feasibility of crowdsourced editing and investigate questions of reliability, cost, wait time, and work time for edits.
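The Find-Fix-Verify pattern can be illustrated with a short sketch. The snippet below is a minimal illustration under assumptions, not the paper's implementation: it assumes a hypothetical `ask_crowd(prompt, n)` callable that posts a microtask to n workers and returns the list of their responses, standing in for a real crowdsourcing platform, and the 20% agreement threshold is an illustrative default.

```python
from collections import Counter

def find_fix_verify(paragraph, ask_crowd, n_find=10, n_fix=5, n_verify=5, agreement=0.2):
    """Sketch of Find-Fix-Verify: separate crowds find problems, propose fixes,
    and vote on the fixes, so no single worker's output goes unchecked."""
    # Find: independent workers each flag a phrase that needs editing.
    flagged = ask_crowd(f"Identify one phrase that needs editing:\n{paragraph}", n_find)
    # Keep only spans flagged by at least `agreement` of the Find workers,
    # filtering out idiosyncratic suggestions.
    counts = Counter(flagged)
    spans = [s for s, c in counts.items() if c / n_find >= agreement]

    edits = []
    for span in spans:
        # Fix: a separate pool of workers proposes rewrites for each flagged span.
        candidates = ask_crowd(f"Rewrite this phrase: '{span}'", n_fix)
        # Verify: a third pool votes on the candidates, screening out
        # lazy or low-quality rewrites before anything reaches the author.
        votes = ask_crowd(f"Pick the best rewrite of '{span}': {candidates}", n_verify)
        best, _ = Counter(votes).most_common(1)[0]
        edits.append((span, best))
    return edits
```

The key design choice the sketch tries to capture is the split between generation and review stages: independent Find, Fix, and Verify pools keep any one worker from both introducing and approving a bad edit.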

Citations

Mechanical Novel: Crowdsourcing Complex Work through Reflection and Revision

TLDR
This work proposes a technique for achieving interdependent complex goals with crowds, and embodies it in Mechanical Novel, a system that crowdsources short fiction stories on Amazon Mechanical Turk.

Supporting Collaborative Writing with Microtasks

This paper presents the MicroWriter, a system that decomposes the task of writing into three types of microtasks to produce a single report: 1) generating ideas, 2) labeling ideas to organize them, and 3) writing paragraphs from groups of related ideas.

CrowdSummary : Crowdsourced Abstractive Summary Generation with an Intelligent Interface

TLDR
This project aims at building a crowd-computer hybrid system that first locates important sentences using statistical natural language processing and then recruits crowd workers to combine these sentences to generate high-quality abstractive summaries for users in a cheap and affordable manner.
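As a rough illustration of the automatic first stage of such a hybrid pipeline, the sketch below scores sentences by simple word-frequency statistics and keeps the top few, which would then be handed to crowd workers for abstractive combination. The scoring scheme is a generic placeholder, not CrowdSummary's actual method.

```python
import re
from collections import Counter

def top_sentences(text, k=3):
    """Pick the k sentences whose words are most frequent in the document,
    a crude stand-in for the statistical sentence-importance step."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence):
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    # The selected sentences would then be posted to crowd workers,
    # who merge and rewrite them into an abstractive summary.
    return sorted(sentences, key=score, reverse=True)[:k]
```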

Real-time crowd control of existing interfaces

TLDR
Legion is introduced, a system that allows end users to easily capture existing GUIs and outsource them for collaborative, real-time control by the crowd; Legion is further validated by exploring the space of novel applications it enables.

How We Write with Crowds

Writing is a common task for crowdsourcing researchers exploring complex and creative work. To better understand how we write with crowds, we conducted both a literature review of crowd-writing …

Supporting ESL Writing by Prompting Crowdsourced Structural Feedback

TLDR
This work proposes StructFeed, which allows native speakers to annotate the topic sentence and relevant keywords in a text and generates writing hints based on the principle of paragraph unity; the crowd-based method was compared with three machine learning methods and achieved the best performance.

Crowdsourcing Accurate and Creative Word Problems and Hints

TLDR
This work builds upon successful task design factors in prior work and runs a series of iterative studies, incrementally adding different worker-support elements, showing that successive task designs improved accuracy and creativity.

Crowdsourcing Natural Language Data at Scale: A Hands-On Tutorial

TLDR
This tutorial introduces data labeling via public crowdsourcing marketplaces and presents the key components of efficient label collection.

ReTool: Interactive Microtask and Workflow Design through Demonstration

TLDR
ReTool is presented, a web-based tool for requesters to design and publish interactive microtasks and workflows; it is evaluated against a task-design tool from a popular crowdsourcing platform, showing the advantages of ReTool over the existing approach.

OpinionBlocks: A Crowd-Powered, Self-improving Interactive Visual Analytic System for Understanding Opinion Text

TLDR
Through two crowdsourced studies on Amazon Mechanical Turk involving 101 users, OpinionBlocks demonstrates its effectiveness in helping users perform real-world opinion analysis tasks; the studies also show that the crowd is willing to correct analytic errors, and that these corrections significantly improve user task completion time.
...

References

Showing 1-10 of 41 references

Coordinating tasks on the commons: designing for personal goals, expertise and serendipity

How is work created, assigned, and completed on large-scale, crowd-powered systems like Wikipedia? And what design principles might enable these federated online systems to be more effective? …

Crowdsourcing user studies with Mechanical Turk

TLDR
Although micro-task markets have great potential for rapidly collecting user measurements at low costs, it is found that special care is needed in formulating tasks in order to harness the capabilities of the approach.

Cheap and Fast – But is it Good? Evaluating Non-Expert Annotations for Natural Language Tasks

TLDR
This work explores the use of Amazon's Mechanical Turk system, a significantly cheaper and faster method for collecting annotations from a broad base of paid non-expert contributors over the Web, and proposes a technique for bias correction that significantly improves annotation quality on two tasks.

Ensuring quality in crowdsourced search relevance evaluation: The effects of training question distribution

TLDR
It is concluded that in a relevance categorization task, a uniform distribution of labels across the training data produces optimal peaks in both 1) individual worker precision and 2) majority-voting aggregate accuracy.

Summarization beyond sentence extraction: A probabilistic approach to sentence compression

Who are the crowdworkers?: shifting demographics in mechanical turk

TLDR
This paper describes how the worker population has changed over time, shifting from a primarily moderate-income, U.S.-based workforce towards an increasingly international group with a significant population of young, well-educated Indian workers.

Koala: capture, share, automate, personalize business processes on the web

TLDR
Koala is a collaborative programming-by-demonstration system that records, edits, and plays back user interactions as pseudo-natural language scripts that are both human- and machine-interpretable.

VizWiz: nearly real-time answers to visual questions

TLDR
VizWiz is introduced, a talking application for mobile phones that offers a new alternative for answering visual questions in nearly real time: asking multiple people on the web to answer questions quickly.

Crowdsourcing graphical perception: using mechanical turk to assess visualization design

TLDR
The viability of Amazon's Mechanical Turk as a platform for graphical perception experiments is assessed; cost and performance data are reported, and recommendations for the design of crowdsourced studies are distilled.

Human computation: a survey and taxonomy of a growing field

TLDR
This work classifies human computation systems to help identify parallels between different systems and reveal "holes" in the existing work as opportunities for new research.