Crowdsourcing for relevance evaluation

@article{Alonso2008CrowdsourcingFR,
  title={Crowdsourcing for relevance evaluation},
  author={Omar Alonso and D. E. Rose and Benjamin Stewart},
  journal={SIGIR Forum},
  year={2008},
  volume={42},
  pages={9-15}
}
Relevance evaluation is an essential part of the development and maintenance of information retrieval systems. Yet traditional evaluation approaches have several limitations; in particular, conducting new editorial evaluations of a search system can be very expensive. We describe a new approach to evaluation called TERC, based on the crowdsourcing paradigm, in which many online users, drawn from a large community, each performs a small evaluation task. 
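The abstract does not spell out how TERC combines the individual workers' judgments, but the core idea of crowdsourced relevance evaluation is to collect redundant labels for each query-document pair from many workers and then aggregate them. The following minimal Python sketch illustrates one common aggregation scheme, simple majority voting; the data layout, worker IDs, and tie-breaking rule are illustrative assumptions, not details taken from the paper.

from collections import Counter, defaultdict

# Hypothetical judgments from crowd workers: each record is
# (worker_id, query, doc_id, graded relevance label such as 0/1/2).
judgments = [
    ("w1", "jaguar speed", "doc42", 2),
    ("w2", "jaguar speed", "doc42", 2),
    ("w3", "jaguar speed", "doc42", 1),
    ("w1", "jaguar speed", "doc77", 0),
    ("w2", "jaguar speed", "doc77", 1),
    ("w3", "jaguar speed", "doc77", 0),
]

def aggregate_majority(judgments):
    """Collapse redundant worker labels into one label per (query, doc)
    pair by simple majority vote; ties fall back to the lowest label."""
    by_pair = defaultdict(list)
    for _, query, doc_id, label in judgments:
        by_pair[(query, doc_id)].append(label)
    aggregated = {}
    for pair, labels in by_pair.items():
        counts = Counter(labels)
        top = max(counts.values())
        # Deterministic tie-break: smallest label among the most frequent.
        aggregated[pair] = min(l for l, c in counts.items() if c == top)
    return aggregated

if __name__ == "__main__":
    for (query, doc_id), label in sorted(aggregate_majority(judgments).items()):
        print(f"{query}\t{doc_id}\trelevance={label}")

In practice, majority voting is often replaced by weighted schemes that account for worker reliability (for example, EM-style estimators), but the redundant-labels-plus-aggregation pattern shown here is the basic structure of this kind of evaluation pipeline.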