• Corpus ID: 2580067

Overview of EIREX 2011: Crowdsourcing

@article{Urbano2012OverviewOE,
  title={Overview of EIREX 2011: Crowdsourcing},
  author={Juli{\'a}n Urbano and Diego Mart{\'i}n and M{\'o}nica Marrero and Jorge Luis Morato Lara},
  journal={ArXiv},
  year={2012},
  volume={abs/1203.0518}
}
The second Information Retrieval Education through EXperimentation track (EIREX 2011) was run at the University Carlos III of Madrid, during the 2011 spring semester. EIREX 2011 is the second in a series of experiments designed to foster new Information Retrieval (IR) education methodologies and resources, with the specific goal of teaching undergraduate IR courses from an experimental perspective. For an introduction to the motivation behind the EIREX experiments, see the first sections of… 

Citations

Overview of EIREX 2012: Social Media
TLDR
This overview paper summarizes the results of the EIREX 2012 track, focusing on the creation of the test collection and the analysis to assess its reliability.

References

Overview of EIREX 2010: Computing
TLDR
This overview paper summarizes the results of the EIREX 2010 track, focusing on the creation of the test collection and the analysis to assess its reliability.
Information Retrieval Meta-Evaluation: Challenges and Opportunities in the Music Domain
TLDR
A survey of past meta-evaluation work in the context of Text Information Retrieval argues that the music community still needs to address various issues concerning the evaluation of music systems and the IR cycle, pointing out directions for further research and proposals along these lines.
Crawling the web for structured documents
TLDR
This demo describes a distributed and focused web crawler for any kind of structured documents, and it is shown how to exploit general-purpose resources to gather large amounts of real-world structured documents off the Web.
The Philosophy of Information Retrieval Evaluation
TLDR
The fundamental assumptions and appropriate uses of the Cranfield paradigm, especially as they apply in the context of the evaluation conferences, are reviewed.
How reliable are the results of large-scale information retrieval experiments?
TLDR
A detailed empirical investigation of the TREC results shows that the measured relative performance of systems appears to be reliable, but that recall is overestimated: it is likely that many relevant documents have not been found.
TREC: Experiment and evaluation in information retrieval
TLDR
This book presents the history, test collections, and evaluation methodology of the Text REtrieval Conference (TREC) and the retrieval experiments built around it.
Bringing undergraduate students closer to a real-world information retrieval setting: methodology and resources
TLDR
A pilot experiment to update the program of an Information Retrieval course for Computer Science undergraduates shows that the methodology is reliable and feasible, and the plan is to improve it and keep using it in the coming years, leading to a public repository of resources for Information Retrieval courses.
Variations in relevance judgments and the measurement of retrieval effectiveness
TLDR
Very high correlations were found among the rankings of systems produced using different relevance judgment sets, indicating that the comparative evaluation of retrieval performance is stable despite substantial differences in relevance judgments, thus reaffirming the use of the TREC collections as laboratory tools.
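
That stability is typically quantified with a rank correlation, such as Kendall's tau, between the system orderings produced by two judgment sets. A minimal illustrative sketch follows; the system names and scores are made-up placeholders, not data from the paper.

# Sketch: compare the system rankings induced by two relevance judgment
# sets using Kendall's tau. All scores below are hypothetical placeholders.
from scipy.stats import kendalltau

scores_judgments_a = {"sysA": 0.41, "sysB": 0.38, "sysC": 0.29, "sysD": 0.33}
scores_judgments_b = {"sysA": 0.45, "sysB": 0.36, "sysC": 0.31, "sysD": 0.30}

systems = sorted(scores_judgments_a)  # fixed system order for both score lists
tau, p_value = kendalltau([scores_judgments_a[s] for s in systems],
                          [scores_judgments_b[s] for s in systems])
print(f"Kendall tau = {tau:.2f} (p = {p_value:.3f})")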

Figures and Tables from this paper

Table 5: Mean and maximum increments observed in NDCG@100, AP@100, P@10 and RR, over all 15 systems, as a function of pool size
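
The measures named in Table 5 are standard ranked-retrieval effectiveness metrics. A minimal sketch of how they can be computed from binary relevance judgments follows; the function names and toy data are illustrative and are not the EIREX evaluation code.

# Sketch of P@10, RR, AP@100 and NDCG@100 for one topic, assuming binary
# relevance judgments given as a set of relevant document IDs.
import math

def precision_at_k(ranking, relevant, k=10):
    # Fraction of the top-k retrieved documents that are relevant.
    return sum(1 for d in ranking[:k] if d in relevant) / k

def reciprocal_rank(ranking, relevant):
    # 1 / rank of the first relevant document (0 if none is retrieved).
    for i, d in enumerate(ranking, start=1):
        if d in relevant:
            return 1.0 / i
    return 0.0

def average_precision_at_k(ranking, relevant, k=100):
    # Mean of the precision values at the ranks of relevant documents,
    # normalized by the total number of relevant documents.
    hits, score = 0, 0.0
    for i, d in enumerate(ranking[:k], start=1):
        if d in relevant:
            hits += 1
            score += hits / i
    return score / len(relevant) if relevant else 0.0

def ndcg_at_k(ranking, relevant, k=100):
    # Binary-gain NDCG: DCG of the ranking divided by the DCG of an ideal
    # ranking that places all relevant documents first.
    dcg = sum(1.0 / math.log2(i + 1)
              for i, d in enumerate(ranking[:k], start=1) if d in relevant)
    ideal = sum(1.0 / math.log2(i + 1)
                for i in range(1, min(len(relevant), k) + 1))
    return dcg / ideal if ideal > 0 else 0.0

# Toy example: one topic, one system run (hypothetical document IDs).
qrels = {"d3", "d7", "d9"}
run = ["d1", "d3", "d5", "d7", "d2", "d9"]
print(precision_at_k(run, qrels, k=10),
      reciprocal_rank(run, qrels),
      average_precision_at_k(run, qrels, k=100),
      ndcg_at_k(run, qrels, k=100))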