Domain Specific Knowledge Base Construction via Crowdsourcing

Abstract

Guiding principles for selecting the best crowdsourcing methodology for a given information-gathering task remain insufficient. This paper contributes additional experimental evidence and analysis to this problem. Our work focuses on a subset of crowdsourcing problems we term expert tasks: tasks that require specific domain knowledge. We experiment with crowdsourcing a knowledge base (KB) of scientists and their institutions using two methods: the first recruits experts who are likely to already possess the necessary domain knowledge (using Google AdWords); the second employs non-experts who are incentivized to look up the information (using Amazon Mechanical Turk). We find that responses received through Mechanical Turk are more accurate than those received through AdWords. We analyze this result in terms of the difficulty of recruiting experts for our task and the willingness of Mechanical Turk workers to search the web for information. Our work highlights important considerations for crowdsourcing tasks requiring various types of expertise.
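
The second method described above posts lookup microtasks to Amazon Mechanical Turk. As an illustration only, the sketch below shows how a single affiliation-lookup HIT might be posted with the boto3 MTurk client. The paper does not specify its task interface, so the researcher name, reward, and question wording here are hypothetical.

# Hypothetical sketch: posting one affiliation-lookup task to the
# Mechanical Turk *sandbox* via boto3 (requires AWS credentials).
# The task content below is illustrative, not the paper's actual HIT.
import boto3

QUESTION_XML = """<QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
  <Question>
    <QuestionIdentifier>institution</QuestionIdentifier>
    <IsRequired>true</IsRequired>
    <QuestionContent>
      <Text>What is the current institutional affiliation of researcher Jane Doe? Please search the web if you are unsure.</Text>
    </QuestionContent>
    <AnswerSpecification>
      <FreeTextAnswer/>
    </AnswerSpecification>
  </Question>
</QuestionForm>"""

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    # Sandbox endpoint; remove for the production marketplace.
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

hit = mturk.create_hit(
    Title="Find a researcher's current institution",
    Description="Look up one scientist's institutional affiliation.",
    Keywords="research, lookup, affiliation",
    Reward="0.05",                    # USD, passed as a string
    MaxAssignments=3,                 # redundant answers for quality control
    LifetimeInSeconds=24 * 60 * 60,   # HIT visible for one day
    AssignmentDurationInSeconds=10 * 60,
    Question=QUESTION_XML,
)
print("HIT created:", hit["HIT"]["HITId"])

Collecting several assignments per HIT (MaxAssignments above) is a common quality-control choice for free-text answers, since agreement across workers can be used to filter inaccurate responses.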

Cite this paper

@inproceedings{Kobren2014DomainSK,
  title  = {Domain Specific Knowledge Base Construction via Crowdsourcing},
  author = {Ari Kobren and Thomas L. Logan and Siddarth Sampangi and Andrew McCallum},
  year   = {2014}
}