Quizz: targeted crowdsourcing with a billion (potential) users
@article{Ipeirotis2014QuizzTC,
  title   = {Quizz: targeted crowdsourcing with a billion (potential) users},
  author  = {Panagiotis G. Ipeirotis and Evgeniy Gabrilovich},
  journal = {Proceedings of the 23rd International Conference on World Wide Web},
  year    = {2014}
}
We describe Quizz, a gamified crowdsourcing system that simultaneously assesses the knowledge of users and acquires new knowledge from them. Quizz operates by asking users to complete short quizzes on specific topics; as a user answers the quiz questions, Quizz estimates the user's competence. To acquire new knowledge, Quizz also incorporates questions for which we do not have a known answer; the answers given by competent users provide useful signals for selecting the correct answers for these…
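To make the mechanism described in the abstract concrete, the sketch below illustrates the general idea in Python: a user's competence is estimated from calibration questions with known answers, and answers to collection questions (those without a known answer) are aggregated with weights derived from that competence estimate. This is a minimal sketch, not the paper's implementation: the class and method names (`QuizzAggregator`, `record_calibration`, `record_collection`) are illustrative assumptions, and the simple smoothed-accuracy weighting stands in for the more elaborate quality-estimation machinery a production system would use.

```python
from collections import defaultdict


class QuizzAggregator:
    """Toy sketch of a Quizz-style pipeline: calibrate users on questions
    with known answers, then aggregate their answers to unknown questions."""

    def __init__(self, prior_correct=1.0, prior_total=2.0):
        # Smoothed competence estimate: every user starts at 50% accuracy.
        self.correct = defaultdict(lambda: prior_correct)
        self.total = defaultdict(lambda: prior_total)
        # votes[question][answer] -> accumulated competence weight
        self.votes = defaultdict(lambda: defaultdict(float))

    def record_calibration(self, user, was_correct):
        """Update a user's competence from a question with a known answer."""
        self.correct[user] += 1.0 if was_correct else 0.0
        self.total[user] += 1.0

    def competence(self, user):
        """Smoothed fraction of calibration questions answered correctly."""
        return self.correct[user] / self.total[user]

    def record_collection(self, user, question, answer):
        """Weight an answer to an unknown question by the user's competence."""
        self.votes[question][answer] += self.competence(user)

    def best_answer(self, question):
        """Return the answer with the largest competence-weighted support."""
        candidates = self.votes[question]
        return max(candidates, key=candidates.get) if candidates else None


if __name__ == "__main__":
    agg = QuizzAggregator()
    agg.record_calibration("alice", True)
    agg.record_calibration("alice", True)
    agg.record_calibration("bob", False)
    agg.record_collection("alice", "capital_of_australia", "Canberra")
    agg.record_collection("bob", "capital_of_australia", "Sydney")
    print(agg.best_answer("capital_of_australia"))  # -> "Canberra"
```

In this toy run, the user with the better calibration record contributes more weight, so her answer to the unknown question wins the aggregation.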
151 Citations
Domain Specific Knowledge Base Construction via Crowdsourcing
- Computer Science
- 2014
This work investigates crowdsourcing a knowledge base of scientists and their institutions using two methods: the first recruits experts who are likely to already know the necessary domain knowledge (using Google AdWords); the second employs non-experts who are incentivized to look up the information (using Amazon Mechanical Turk).
Reply & Supply: Efficient crowdsourcing when workers do more than answer questions
- Computer Science, PLoS ONE
- 2017
This work introduces algorithms to help curtail the growth bias by efficiently distributing workers between exploring new questions and addressing current questions, and demonstrates that these algorithms can efficiently explore an unbounded set of questions without losing confidence in crowd answers.
Vexation-Aware Active Learning for On-Menu Restaurant Dish Availability
- Computer Science, KDD
- 2022
This paper studies the problem of Vexation-Aware Active Learning (VAAL), where judiciously selected questions are targeted towards improving restaurant-dish model prediction, subject to a limit on the percentage of "unsure" answers or "dismissals" measuring user vexation.
Predicting the quality of new contributors to the Facebook crowdsourcing system
- Computer Science
- 2014
An approach to model user trust when prior history is lacking is presented, so that the system can incorporate more new users' contributions into crowdsourced decisions and provide quicker feedback to new participants.
Reply & Supply: Efficient crowdsourced exploration for growing question sets and nets
- Computer Science, arXiv
- 2016
A probability matching algorithm is introduced to curtail this bias by efficiently distributing workers between exploring new questions and addressing current questions, and can efficiently explore an unbounded set of questions while maintaining confidence in crowd answers.
Hyper Questions: Unsupervised Targeting of a Few Experts in Crowdsourcing
- Computer Science, CIKM
- 2017
This paper focuses on an important class of answer aggregation problems in which majority voting fails and proposes the concept of hyper questions to devise effective aggregation methods, exploiting the fact that experts are more likely than non-experts to answer correctly all of the single questions included in a hyper question.
Quality Control in Crowdsourcing
- Computer Science, ACM Comput. Surv.
- 2018
This survey derives a quality model for crowdsourcing tasks, identifies the methods and techniques that can be used to assess the attributes of the model, and the actions and strategies that help prevent and mitigate quality problems.
On the Invitation of Expert Contributors from Online Communities for Knowledge Crowdsourcing Tasks
- Computer Science, ICWE
- 2016
An experiment in inviting expert contributors is presented, providing novel insights on the effectiveness of direct invitation strategies while showing that, in the context of the experiment, soliciting collaboration through communities yields more contributions.
CrowdDQS: Dynamic Question Selection in Crowdsourcing Systems
- Computer Science, SIGMOD Conference
- 2017
CrowdDQS is presented, a system that uses the most recent set of crowdsourced voting evidence to dynamically issue questions to workers on Amazon Mechanical Turk, and can accurately answer questions using up to 6x fewer votes than standard approaches.
References
Using the wisdom of the crowds for keyword generation
- Computer Science, Economics, WWW
- 2008
This work identifies queries related to a campaign by exploiting the associations between queries and URLs as they are captured by the user's clicks, and proposes algorithms within the Markov Random Field model to solve this problem.
Quality-Based Pricing for Crowdsourced Workers
- Computer Science
- 2013
This work presents a comprehensive scheme for managing the quality of crowdsourcing processes and describes a pricing scheme that pays workers based on their expected quality, reservation wage, and expected lifetime; the scheme accommodates measurement uncertainties and allows workers to receive a fair wage even in the presence of temporarily incorrect quality estimates.
Pay by the bit: an information-theoretic metric for collective human judgment
- Computer Science, AAAI Fall Symposium: Machine Aggregation of Human Judgment
- 2012
This work considers the problem of evaluating the performance of human contributors for tasks involving answering a series of questions, each of which has a single correct answer, and proposes using multivariable information measures, such as conditional mutual information, to measure the interactions between contributors' judgments.
Steering user behavior with badges
- Computer Science, WWW
- 2013
A formal model for reasoning about user behavior in the presence of badges is introduced; several robust design principles emerge from the framework and could potentially aid in the design of incentives for a broad range of sites.
User engagement: the network effect matters!
- Computer Science, CIKM
- 2012
Networked user engagement is addressed by combining techniques from web analytics and mining, information retrieval evaluation, and existing work on user engagement from information science, multimodal human-computer interaction, and cognitive psychology, pairing insights from big data with deep analysis of human behavior in the lab or through crowdsourcing experiments.
Incentives, gamification, and game theory: an economic approach to badge design
- Economics, EC '13
- 2013
A game-theoretic approach to badge design is taken, analyzing the incentives created by widely used badge designs in a model where winning a badge is valued, effort is costly, and potential contributors to a site endogenously decide whether or not to participate and how much total effort to put into their contributions.
Rethinking the ESP game
- Computer Science, CHI Extended Abstracts
- 2009
This work discusses how the scoring system and the design of the ESP game can be improved to encourage users to add less predictable labels, thereby improving the quality of the collected information.
The Multidimensional Wisdom of Crowds
- Computer Science, NIPS
- 2010
A method is presented for estimating the underlying value of each image from (noisy) annotations provided by multiple annotators, based on a model of the image formation and annotation process; it predicts ground-truth labels on both synthetic and real data more accurately than state-of-the-art methods.
Amplifying community content creation with mixed initiative information extraction
- Computer Science, CHI
- 2009
This work explores the potential synergy promised if two interlocking feedback cycles can be made to accelerate each other, by exploiting the same edits to advance both community content creation and learning-based information extraction.