Recent work has introduced CASCADE, an algorithm for creating a globally-consistent taxonomy by crowdsourcing microwork from many individuals, each of whom may see only a tiny fraction of the data (Chilton et al. 2013). While CASCADE needs only unskilled labor and produces taxonomies whose quality approaches that of human experts, it uses significantly …
An ideal crowdsourcing or citizen-science system would route tasks to the most appropriate workers, but the best assignment is unclear because workers have varying skill, tasks have varying difficulty, and assigning several workers to a single task may significantly improve output quality. This paper defines a space of task routing problems, proves that …
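The snippet only names the routing trade-offs, so the sketch below is a toy illustration rather than the paper's model: it assumes an invented accuracy function of worker skill and task difficulty, treats a task's quality as the chance that at least one assigned worker is correct, and routes workers greedily by marginal gain. All worker names, skills, and difficulties are made up.

```python
# Toy illustration of the routing trade-off, not the paper's model:
# workers have skills, tasks have difficulties, and piling extra workers
# onto a hard task can beat starting on an easier one. Numbers are made up.

def p_correct(skill, difficulty):
    """Chance a single worker answers correctly in this toy model."""
    return 0.5 + 0.5 * skill * (1.0 - difficulty)

def route(workers, tasks):
    """Greedily send each worker to the task whose expected quality
    (probability that at least one assigned worker is correct) gains most."""
    assignments = {t: [] for t in tasks}
    p_all_wrong = {t: 1.0 for t in tasks}   # running chance every assigned worker fails
    for name, skill in sorted(workers.items(), key=lambda kv: -kv[1]):
        # marginal gain of adding this worker to a task = p_all_wrong * p_correct
        best = max(tasks, key=lambda t: p_all_wrong[t] * p_correct(skill, tasks[t]))
        assignments[best].append(name)
        p_all_wrong[best] *= 1.0 - p_correct(skill, tasks[best])
    return assignments

workers = {"alice": 0.9, "bob": 0.6, "carol": 0.4, "dan": 0.3}   # worker -> skill
tasks = {"easy": 0.2, "hard": 0.8}                               # task -> difficulty
print(route(workers, tasks))   # the hard task ends up with redundant workers
```

The greedy rule is only one way to balance spreading workers across tasks against adding redundancy to hard ones; the point is just to make the trade-off in the abstract concrete.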
Can crowdsourced annotation of training data boost performance for relation extraction over methods based solely on distant supervision? While crowdsourcing has been shown effective for many NLP tasks, previous researchers found only minimal improvement when applying the method to relation extraction. This paper demonstrates that a much larger boost is …
MCM is the flagship conference of the Society for Mathematics and Computation in Music. The inaugural conference of the society took place in 2007 in Berlin. The study of mathematics and music dates back to the time of the ancient Greeks. The rise of computing and the digital age has added computation to this august tradition. MCM aims to provide a dedicated …
The vision of artificial intelligence (AI) is often manifested through an autonomous software module (agent) in a complex and uncertain environment. The agent is capable of thinking ahead and acting for long periods of time in accordance with its goals/objectives. It is also capable of learning and refining its understanding of the world. The agent may …
Crowd workers are human and thus sometimes make mistakes. In order to ensure the highest quality output, requesters often issue redundant jobs with gold test questions and sophisticated aggregation mechanisms based on expectation maximization (EM). While these methods yield accurate results in many cases, they fail on extremely difficult problems with local …
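The EM-based aggregation is only mentioned above, not described. The sketch below shows one standard variant (a simplified Dawid-Skene model for binary labels, with a single accuracy parameter per worker), not necessarily the method used in this line of work; the tasks, workers, and votes are invented for illustration.

```python
# Minimal sketch of EM-style aggregation of redundant binary labels
# (a simplified Dawid-Skene model: one accuracy parameter per worker).
# Workers, tasks, and votes below are invented, not from the paper.
from collections import defaultdict

labels = {                      # task -> list of (worker, answer in {0, 1})
    "t1": [("w1", 1), ("w2", 1), ("w3", 0)],
    "t2": [("w1", 0), ("w2", 0), ("w3", 0)],
    "t3": [("w1", 1), ("w2", 0), ("w3", 1)],
}

workers = {w for votes in labels.values() for w, _ in votes}
accuracy = {w: 0.7 for w in workers}     # initial guess at each worker's accuracy
posterior = {t: 0.5 for t in labels}     # P(true label = 1) for each task

for _ in range(20):                      # EM iterations
    # E-step: posterior over each task's true label, given current accuracies
    for t, votes in labels.items():
        p1 = p0 = 1.0
        for w, a in votes:
            acc = accuracy[w]
            p1 *= acc if a == 1 else 1 - acc
            p0 *= acc if a == 0 else 1 - acc
        posterior[t] = p1 / (p1 + p0)
    # M-step: re-estimate worker accuracies from the soft labels
    correct, total = defaultdict(float), defaultdict(float)
    for t, votes in labels.items():
        for w, a in votes:
            correct[w] += posterior[t] if a == 1 else 1 - posterior[t]
            total[w] += 1
    for w in workers:
        # clamp to keep accuracies away from degenerate 0 or 1
        accuracy[w] = min(max(correct[w] / total[w], 0.01), 0.99)

print({t: int(p > 0.5) for t, p in posterior.items()})   # aggregated labels
```

Weighting votes by estimated worker accuracy is what lets this kind of aggregation beat simple majority vote when some workers are much noisier than others.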
Requesters on crowdsourcing platforms, such as Amazon Mechanical Turk, routinely insert gold questions to verify that a worker is diligent and is providing high-quality answers. However, there is no clear understanding of when and how many gold questions to insert. Typically, requesters mix a flat 10–30% of gold questions into the task stream of every …
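As a concrete illustration of the flat-rate mixing described above, the sketch below builds a task stream in which gold questions make up a fixed fraction of what a worker sees; the function name, rate, and task names are hypothetical.

```python
# Sketch of flat-rate gold mixing: a fixed fraction of each worker's task
# stream is gold questions with known answers. Names and rates are hypothetical.
import random

def build_task_stream(real_tasks, gold_questions, gold_rate=0.2, seed=0):
    """Interleave gold questions so they make up ~gold_rate of the stream."""
    rng = random.Random(seed)
    n_gold = round(len(real_tasks) * gold_rate / (1 - gold_rate))
    gold = [rng.choice(gold_questions) for _ in range(n_gold)]
    stream = [("real", t) for t in real_tasks] + [("gold", g) for g in gold]
    rng.shuffle(stream)
    return stream

stream = build_task_stream(
    real_tasks=[f"task-{i}" for i in range(40)],
    gold_questions=["gold-a", "gold-b", "gold-c"],
    gold_rate=0.2,
)
print(sum(kind == "gold" for kind, _ in stream), "gold items out of", len(stream))
```

A flat rate like this is exactly the baseline the abstract questions: it spends the same gold budget on every worker, regardless of how much evidence about that worker's quality has already accumulated.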
Successful online communities (e.g., Wikipedia, Yelp, and StackOverflow) can produce valuable content. However, many communities fail in their initial stages. Starting an online community is challenging because there is not enough content to attract a critical mass of active members. This paper examines methods for addressing this cold-start problem in …