On the impact of predicate complexity in crowdsourced classification tasks
@article{Ramrez2020OnTI,
  title   = {On the impact of predicate complexity in crowdsourced classification tasks},
  author  = {J. Ram{\'i}rez and M. B{\'a}ez and F. Casati and Luca Cernuzzi and Boualem Benatallah and E. Taran and V. Malanina},
  journal = {ArXiv},
  year    = {2020},
  volume  = {abs/2011.02891}
}
This paper explores and offers guidance on a specific and relevant problem in task design for crowdsourcing: how to formulate a complex question used to classify a set of items. In micro-task markets, classification is still among the most popular tasks. We situate our work in the context of information retrieval and multi-predicate classification, i.e., classifying a set of items based on a set of conditions. Our experiments cover a wide range of tasks and domains, and also consider crowd…