Domain Specific Knowledge Base Construction via Crowdsourcing

Abstract

Guiding principles for selecting the best crowdsourcing methodology for a given information-gathering task remain insufficient. This paper contributes additional experimental evidence and analysis to this problem. Our work focuses on a subset of crowdsourcing problems we term expert tasks—tasks that require specific domain knowledge. We experiment with crowdsourcing a knowledge base (KB) of scientists and their institutions using two methods: the first recruits experts who are likely to already possess the necessary domain knowledge (using Google AdWords); the second employs non-experts who are incentivized to look up the information (using Amazon Mechanical Turk). We find that responses received through Mechanical Turk are more accurate than those received through AdWords. We analyze this result in terms of the difficulty of recruiting experts for our task and the willingness of Mechanical Turk workers to search the web for information. Our work highlights important considerations for crowdsourcing tasks requiring various types of expertise.

