Recent work has introduced CASCADE, an algorithm for creating a globally-consistent taxonomy by crowdsourcing microwork from many individuals, each of whom may see only a tiny fraction of the data (Chilton et al. 2013). While CASCADE needs only unskilled labor and produces taxonomies whose quality approaches that of human experts, it uses significantly more labor than experts. …
An ideal crowdsourcing or citizen-science system would route tasks to the most appropriate workers, but the best assignment is unclear because workers have varying skill, tasks have varying difficulty, and assigning several workers to a single task may significantly improve output quality. This paper defines a space of task routing problems, proves that …
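The tension this abstract describes (worker skill vs. task difficulty vs. the value of redundancy) can be made concrete with a toy greedy router. The sketch below is purely illustrative and is not the paper's algorithm; the accuracy model and all names are assumptions:

```python
def route(worker_skill, tasks):
    """Toy greedy router (illustrative only; not the paper's algorithm).
    Sends an arriving worker to the open task with the highest expected
    marginal gain, trading off the worker's chance of answering
    correctly against the redundancy the task has already received.

    `worker_skill`: float in [0, 1].
    `tasks`: dict task_id -> {"difficulty": float in [0, 1], "answers": int}.
    """
    def expected_gain(t):
        info = tasks[t]
        # Toy accuracy model: skilled workers do better on easy tasks.
        p_correct = 0.5 + 0.5 * worker_skill * (1 - info["difficulty"])
        # Diminishing returns once a task already has several answers.
        return p_correct * info["difficulty"] / (1 + info["answers"])

    best = max(tasks, key=expected_gain)
    tasks[best]["answers"] += 1
    return best
```

Even this toy version shows why the problem is hard: the best assignment for one worker depends on which tasks other workers have already been routed to.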
Can crowdsourced annotation of training data boost performance for relation extraction over methods based solely on distant supervision? While crowdsourcing has been shown effective for many NLP tasks, previous researchers found only minimal improvement when applying the method to relation extraction. This paper demonstrates that a much larger boost is possible. …
MCM is the flagship conference of the Society for Mathematics and Computation in Music. The inaugural conference of the society took place in 2007 in Berlin. The study of mathematics and music dates back to the time of the ancient Greeks, and the rise of computing and the digital age has added computation to this august tradition. MCM aims to provide a dedicated …
The vision of artificial intelligence (AI) is often manifested through an autonomous software module (agent) operating in a complex and uncertain environment. The agent is capable of thinking ahead and acting for long periods of time in accordance with its goals and objectives. It is also capable of learning and refining its understanding of the world. The agent may …
Requesters on crowdsourcing platforms, such as Amazon Mechanical Turk, routinely insert gold questions to verify that a worker is diligent and is providing high-quality answers. However, there is no clear understanding of when and how many gold questions to insert. Typically, requesters mix a flat 10–30% of gold questions into the task stream of every worker. …
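As a point of reference, the flat-rate baseline this abstract critiques is easy to state in code. This is a minimal sketch under assumed names (`build_task_stream`, `gold_rate`), not an implementation from the paper:

```python
import random

def build_task_stream(tasks, gold_questions, gold_rate=0.2, seed=0):
    """Flat-rate gold insertion (the common baseline): make roughly
    `gold_rate` of the final stream gold questions, shuffled in so the
    worker cannot tell test items from real work.
    """
    rng = random.Random(seed)
    # Number of gold items needed for gold_rate of the combined stream.
    n_gold = round(len(tasks) * gold_rate / (1 - gold_rate))
    stream = list(tasks) + rng.choices(gold_questions, k=n_gold)
    rng.shuffle(stream)
    return stream

# e.g. a 20% gold mix over 100 real tasks adds 25 gold items
stream = build_task_stream(list(range(100)), ["g1", "g2", "g3"], gold_rate=0.2)
```

The same flat rate is applied to every worker regardless of their demonstrated reliability, which is exactly the rigidity the paper questions.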
Crowd workers are human and thus sometimes make mistakes. In order to ensure the highest quality output, requesters often issue redundant jobs with gold test questions and sophisticated aggregation mechanisms based on expectation maximization (EM). While these methods yield accurate results in many cases, they fail on extremely difficult problems with local …
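The EM-based aggregation referred to here is typically a Dawid-Skene-style procedure; a minimal one-coin variant is sketched below, with all names hypothetical. The E- and M-steps reinforce each other, so on hard items where most workers agree on a wrong label the procedure can converge confidently to a poor local optimum, which is the failure mode the abstract highlights.

```python
from collections import defaultdict

def em_aggregate(labels, n_iter=20):
    """Minimal one-coin EM aggregator (a simplified Dawid-Skene variant;
    illustrative, not the paper's exact mechanism). Alternates between
    estimating each task's answer and each worker's accuracy.

    `labels`: iterable of (task_id, worker_id, answer), answer in {0, 1}.
    Returns {task_id: P(true answer == 1)}.
    """
    by_task = defaultdict(list)
    for task, worker, answer in labels:
        by_task[task].append((worker, answer))

    # Initialize task posteriors with the majority-vote rate of 1s.
    post = {t: sum(a for _, a in ws) / len(ws) for t, ws in by_task.items()}

    for _ in range(n_iter):
        # M-step: a worker's accuracy is their expected agreement
        # with the current task posteriors.
        agree, seen = defaultdict(float), defaultdict(int)
        for t, ws in by_task.items():
            for w, a in ws:
                agree[w] += post[t] if a == 1 else 1 - post[t]
                seen[w] += 1
        acc = {w: agree[w] / seen[w] for w in seen}

        # E-step: recompute each task posterior from worker accuracies,
        # assuming a uniform prior over the two answers.
        for t, ws in by_task.items():
            p1 = p0 = 1.0
            for w, a in ws:
                p1 *= acc[w] if a == 1 else 1 - acc[w]
                p0 *= acc[w] if a == 0 else 1 - acc[w]
            post[t] = p1 / (p1 + p0) if p1 + p0 > 0 else 0.5
    return post
```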
Mainstream crowdwork platforms treat microtasks as indivisible units; however, in this article, we propose that there is value in reexamining this assumption. We argue that crowdwork platforms can improve their value proposition for all stakeholders by supporting subcontracting within microtasks. After describing the value proposition of subcontracting, we …
Artificial intelligence (AI) is widely expected to reduce the need for human labor in a variety of sectors. Workers on virtual labor marketplaces accelerate this process by generating training data for AI systems. We propose a new model where workers earn ownership of trained AI systems, allowing them to draw a long-term royalty from a tool that replaces their labor. …