Corpus ID: 226254069

On the impact of predicate complexity in crowdsourced classification tasks

@article{Ramrez2020OnTI,
  title={On the impact of predicate complexity in crowdsourced classification tasks},
  author={J. Ram{\'i}rez and M. B{\'a}ez and F. Casati and Luca Cernuzzi and Boualem Benatallah and E. Taran and V. Malanina},
  journal={ArXiv},
  year={2020},
  volume={abs/2011.02891}
}
  • J. Ramírez, M. Báez, +4 authors V. Malanina
  • Published 2020
  • Computer Science
  • ArXiv
  • This paper explores and offers guidance on a specific and relevant problem in task design for crowdsourcing: how to formulate a complex question used to classify a set of items. In micro-task markets, classification is still among the most popular tasks. We situate our work in the context of information retrieval and multi-predicate classification, i.e., classifying a set of items based on a set of conditions. Our experiments cover a wide range of tasks and domains, and also consider crowd…
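
As a rough illustration of the multi-predicate setting the abstract describes (a sketch, not the authors' implementation), the snippet below screens items against a conjunction of predicates, reducing the crowd votes for each (item, predicate) pair by majority. The function names, the `get_votes` callback, and the sample data are all hypothetical.

```python
from collections import Counter

def majority(votes):
    # Majority boolean label among crowd votes for one (item, predicate) pair.
    return Counter(votes).most_common(1)[0][0]

def screen(items, predicates, get_votes):
    # Conjunctive multi-predicate screening: an item is classified IN
    # only if the crowd majority answers True for *every* predicate.
    return {
        item: all(majority(get_votes(item, p)) for p in predicates)
        for item in items
    }

# Hypothetical usage: screening papers against two conditions.
predicates = ["describes a technology intervention", "targets older adults"]
votes = {
    ("paper-1", predicates[0]): [True, True, False],
    ("paper-1", predicates[1]): [True, True, True],
    ("paper-2", predicates[0]): [False, False, True],
    ("paper-2", predicates[1]): [True, False, True],
}
print(screen(["paper-1", "paper-2"], predicates, lambda i, p: votes[(i, p)]))
# {'paper-1': True, 'paper-2': False}
```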
