Selfsourcing personal tasks

@inproceedings{Teevan2014SelfsourcingPT,
  title={Selfsourcing personal tasks},
  author={Jaime Teevan and Daniel J. Liebling and Walter S. Lasecki},
  booktitle={CHI '14 Extended Abstracts on Human Factors in Computing Systems},
  year={2014}
}
Large tasks can be overwhelming. For example, many people have thousands of digital photographs that languish in unorganized archives because it is difficult and time-consuming to gather them into meaningful collections. Such tasks are hard to start because they seem to require long, uninterrupted periods of effort to make meaningful progress. We propose the idea of selfsourcing as a way to help people perform large personal information tasks by breaking them into manageable microtasks. Using…
Kurator: Using The Crowd to Help Families With Personal Curation Tasks
TLDR
Kurator, a hybrid intelligence system leveraging mixed-expertise crowds to help families curate their personal digital content, produces a refined set of content via a combination of automated systems able to scale to large data sets and human crowds able to understand the data.
Break It Down: A Comparison of Macro- and Microtasks
TLDR
It is found that breaking these tasks into microtasks results in longer overall task completion times, but higher-quality outcomes and a better experience that may be more resilient to interruptions, suggesting that microtasks can help people complete high-quality work in interruption-driven environments.
Microtask Detection
TLDR
This article introduces the novel challenge of microtask detection, and it presents machine-learned models for automatically determining which tasks are actionable and which of these actionable tasks are microtasks, which have implications for the design of systems to help people make the most of their time.
Outsider Perspectives: Crowd-Based Feedback for Writing
TLDR
Crowd-based approaches provide a timely and detailed outsider perspective that supplements expert feedback on authors' work, and a clear need for more feedback is found.
The Knowledge Accelerator: Big Picture Thinking in Small Pieces
TLDR
This paper instantiates the idea that a computational system can scaffold an emerging interdependent, big picture view entirely through the small contributions of individuals through a prototype system for accomplishing distributed information synthesis and evaluates its output across a variety of topics.
Supporting Collaborative Writing with Microtasks
This paper presents the MicroWriter, a system that decomposes the task of writing into three types of microtasks to produce a single report: 1) generating ideas, 2) labeling ideas to organize them,
Productivity Decomposed: Getting Big Things Done with Little Microtasks
TLDR
This workshop brings together researchers in task decomposition, completion, and sourcing to discuss how intersections of research across these areas can pave the path for future research in this space.
Using the Crowd to Improve Search Result Ranking and the Search Experience
TLDR
This article explores how crowdsourcing can be used at query time to augment key stages of the search pipeline and finds that using crowd workers to support rich query understanding and result processing appears to be a more worthwhile way to make use of the crowd during search.
Bringing the Wisdom of the Crowd to an Individual by Having the Individual Assume Different Roles
TLDR
It is found that participants who were asked to assume different roles came up with more creative ideas than those who were not and suggest there is an opportunity for problem solving tools to bring the wisdom of the crowd to individuals.

References

SHOWING 1-10 OF 17 REFERENCES
Real-time crowd control of existing interfaces
TLDR
This paper presents mediation strategies for integrating the input of multiple crowd workers in real-time, evaluates these mediation strategies across several applications, and further validate Legion by exploring the space of novel applications that it enables.
Cascade: crowdsourcing taxonomy creation
TLDR
Cascade is an automated workflow that allows crowd workers to spend as little as 20 seconds each while collectively making a taxonomy, and it is shown that on three datasets its quality is 80-90% of that of experts.
Personalization via friendsourcing
TLDR
The approach to friendsourcing is to design socially enjoyable interactions that produce the desired information as a side effect in a form of crowdsourcing aimed at collecting accurate information available only to a small, socially-connected group of individuals.
TaskGenies: Automatically Providing Action Plans Helps People Complete Tasks
TLDR
It is demonstrated that people can create action plans for others, and that doing so can be cost-effective.
Information extraction and manipulation threats in crowd-powered systems
TLDR
This paper analyzes different forms of threats from individuals and groups of workers extracting information from crowd-powered systems or manipulating these systems' outcomes, and proposes several possible approaches to mitigating these threats.
The future of crowd work
TLDR
This paper outlines a framework that will enable crowd work that is complex, collaborative, and sustainable, and lays out research challenges in twelve major areas: workflow, task assignment, hierarchy, real-time response, synchronous collaboration, quality control, crowds guiding AIs, AIs guiding crowds, platforms, job design, reputation, and motivation.
Brainstorm, Chainstorm, Cheatstorm, Tweetstorm: new ideation strategies for distributed HCI design
TLDR
A model of ideation is presented suggesting that its value has less to do with the generation of novel ideas than with the cultural influence exerted by unconventional ideas on the ideating team, and that brainstorming is more than the pooling of "invented" ideas; it involves the sharing and interpretation of concepts in unintended and (ideally) unanticipated ways.
Measuring the Crowd Within
TLDR
It is found that averaging two responses from the same person improves accuracy, supporting the hypothesis that individuals maintain internal probabilistic representations, analogous to models in which the responses of many people are distributed probabilistically.
No task left behind?: examining the nature of fragmented work
TLDR
Work is found to be highly fragmented: people average little time in working spheres before switching and 57% of their working spheres are interrupted.
Notification, Disruption, and Memory: Effects of Messaging Interruptions on Memory and Performance
TLDR
It is shown that interruptions coming early during a search task are more likely to result in the user forgetting the primary task goal than interruptions that arrive later on, which has implications for the design of user interfaces and notification policies that minimize the disruptiveness of notifications.