Guiding principles for selecting the best crowdsourcing methodology for a given information-gathering task remain insufficient. This paper contributes additional experimental evidence and analysis to this problem. Our work focuses on a subset of crowdsourcing problems we term expert tasks—tasks that require specific domain knowledge. We experiment with crowdsourcing a knowledge base (KB) of scientists and their institutions using two methods: the first recruits experts who are likely to already possess the necessary domain knowledge (using Google AdWords); the second employs non-experts who are incentivized to look up the information (using Amazon Mechanical Turk). We find that responses received through Mechanical Turk are more accurate than those received through AdWords. We analyze this result in terms of the difficulty of recruiting experts for our task and the willingness of Mechanical Turk workers to search the web for information. Our work highlights important considerations for crowdsourcing tasks requiring various types of expertise.