"Hi! I am the Crowd Tasker" Crowdsourcing through Digital Voice Assistants

@inproceedings{Hettiachchi2020HiIA,
  title={"Hi! I am the Crowd Tasker" Crowdsourcing through Digital Voice Assistants},
  author={Danula Hettiachchi and Zhanna Sarsenbayeva and Fraser Allison and Niels van Berkel and Tilman Dingler and Gabriele Marini and Vassilis Kostakos and Jorge Gonçalves},
  booktitle={Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems},
  year={2020}
}
Inspired by the increasing prevalence of digital voice assistants, we demonstrate the feasibility of using voice interfaces to deploy and complete crowd tasks. We have developed Crowd Tasker, a novel system that delivers crowd tasks through a digital voice assistant. In a lab study, we validate our proof-of-concept and show that crowd task performance through a voice assistant is comparable to that of a web interface for voice-compatible and voice-based crowd tasks for native English speakers… 

Figures and Tables from this paper

Citations

How Context Influences Cross-Device Task Acceptance in Crowd Work
TLDR
This work investigates workers' willingness to accept different types of crowd tasks presented on three device types in scenarios of varying location, time, and social context, and shows how these contextual factors influence workers' decisions to accept a task on a given device.
CrowdCog: A Cognitive Skill based System for Heterogeneous Task Assignment and Recommendation in Crowdsourcing
TLDR
This work presents 'CrowdCog', an online dynamic system that performs both task assignment and task recommendation by relying on fast-paced online cognitive tests to estimate worker performance across a variety of tasks, paving the way for the use of quick cognitive tests to provide robust recommendations and assignments to crowd workers.
Understanding User Perceptions of Proactive Smart Speakers
TLDR
A prototype of proactive speakers that verbally engage users at opportune moments is designed, contributing to the design of proactive application scenarios and voice-based experience sampling studies.
Firefox Voice: An Open and Extensible Voice Assistant Built Upon the Web
TLDR
Firefox Voice, a novel voice assistant built on the open web ecosystem, is introduced with the aim of expanding access to information available via voice, and the ways Firefox Voice enables novel, open-web-powered voice-driven experiences are described.
AFFORCE: Actionable Framework for Designing Crowdsourcing Experiences for Older Adults
TLDR
A unique framework for designing attractive and engaging crowdsourcing systems for older adults, called AFFORCE (Actionable Framework For Crowdsourcing Experiences), is proposed, based on experience with the design of crowdsourcing systems for older adults in exploratory cases and studies, related work, and the intersection of older adults' use of ICT, crowdsourcing, and citizen science.
A Survey on Task Assignment in Crowdsourcing
TLDR
This survey reviews task assignment methods that address: heterogeneous task assignment, question assignment, and plurality problems in crowdsourcing, and how crowdsourcing platforms and other stakeholders can benefit from them.
What Could Possibly Go Wrong When Interacting with Proactive Smart Speakers? A Case Study Using an ESM Application
TLDR
It is found that, even for answering simple ESMs, interaction errors occur frequently and can hamper the usability and user experience of proactive speakers; multiple facets of VUIs that can be improved, particularly the timing of speech, are identified.
Mobilizing Crowdwork: A Systematic Assessment of the Mobile Usability of HITs
TLDR
A taxonomy of characteristics that defines the mobile usability of HITs for smartphone devices is presented, highlighting the observed practices and preferences around mobile crowdwork and the implications of the taxonomy for accessibly and ethically mobilizing crowdwork not only within the context of smartphones, but beyond them.
To Trust or Not To Trust: How a Conversational Interface Affects Trust in a Decision Support System
TLDR
The findings show that the conversational interface was significantly more effective in building user trust and satisfaction in the online housing recommendation system when compared to the conventional web interface.

References

Showing 1–10 of 69 references
Crowdsourcing for Speech Processing: Applications to Data Collection, Transcription and Assessment
TLDR
This introduction to crowdsourcing as a means of rapidly processing speech data offers speech researchers the hope that they can spend much less time dealing with the data gathering/annotation bottleneck, leaving them to focus on the scientific issues.
Enabling Creative Crowd Work through Smart Speakers
TLDR
The affordances of digital voice assistants and smart speakers could be utilised to create a novel crowdsourcing platform that delivers crowd tasks through voice, with a particular focus on creative tasks.
BSpeak: An Accessible Voice-based Crowdsourcing Marketplace for Low-Income Blind People
TLDR
A mixed-methods analysis revealed severe accessibility barriers in MTurk due to the absence of landmarks, unlabeled UI elements, and improper use of HTML headings, yielding recommendations for designing crowdsourcing marketplaces for low-income blind people in resource-constrained settings.
ReCall: Crowdsourcing on Basic Phones to Financially Sustain Voice Forums
TLDR
ReCall, a crowdsourcing marketplace accessible via phone calls where low-income rural residents vocally transcribe audio files to gain free airtime to participate in voice forums as well as to earn money, is presented.
CrowdPickUp: Crowdsourcing Task Pickup in the Wild
TLDR
The findings show that workers of CrowdPickUp contributed data of comparable quality to previously presented crowdsourcing deployments while at the same time allowing for a wide breadth of tasks to be deployed.
Pick-a-crowd: tell me what you like, and i'll tell you what to do
TLDR
This paper proposes and extensively evaluates a different crowdsourcing approach based on a push methodology that carefully selects which workers should perform a given task, based on worker profiles extracted from social networks, and shows that this approach consistently yields better results than usual pull strategies.
Older Adults and Crowdsourcing
TLDR
An Android TV application is presented that uses the Amara API to retrieve subtitles for TEDx talks, allowing participants to detect and categorize errors to support the quality of the translation and transcription processes.
Understanding User Satisfaction with Intelligent Assistants
TLDR
A user study designed to measure user satisfaction over a range of typical scenarios of use is described, finding that the notion of satisfaction varies across different scenarios, and that overall task-level satisfaction cannot be reduced to query-level satisfaction alone.
Effect of Cognitive Abilities on Crowdsourcing Task Performance
TLDR
This paper proposes a method that considers workers’ cognitive ability to predict their suitability for a wide range of crowdsourcing tasks, and demonstrates a significant improvement in the expected overall task accuracy.
Respeak: A Voice-based, Crowd-powered Speech Transcription System
TLDR
Respeak is presented - a voice-based, crowd-powered system that capitalizes on the strengths of crowdsourcing and automatic speech recognition to transcribe audio files containing languages spoken in developing countries and regional accents of well-represented languages.