Corpus ID: 503719

Overview of EIREX 2010: Computing

@article{Urbano2012OverviewOE,
  title={Overview of EIREX 2010: Computing},
  author={Juli{\'a}n Urbano and M{\'o}nica Marrero and Diego Mart{\'i}n and Jorge Luis Morato Lara},
  journal={ArXiv},
  year={2012},
  volume={abs/1201.0274}
}
The first Information Retrieval Education through Experimentation track (EIREX 2010) was run at the University Carlos III of Madrid during the 2010 spring semester. EIREX 2010 is the first in a series of experiments designed to foster new Information Retrieval (IR) education methodologies and resources, with the specific goal of teaching undergraduate IR courses from an experimental perspective. For an introduction to the motivation behind the EIREX experiments, see the first sections of…

Figures and Tables from this paper

Table 6. Mean, standard deviation and maximum of the percentage increments in NDCG@100, AP@100, P@10 and RR over 1,000 random combinations of trels, as a function of pool size

Citations

Overview of EIREX 2012: Social Media
TLDR
This overview paper summarizes the results of the EIREX 2012 track, focusing on the creation of the test collection and the analysis to assess its reliability.
Overview of EIREX 2011: Crowdsourcing
TLDR
This overview paper summarizes the results of the EIREX 2011 track, focusing on the creation of the test collection and the analysis to assess its reliability.

References

Showing 10 of 12 references.
Bringing undergraduate students closer to a real-world information retrieval setting: methodology and resources
TLDR
A pilot experiment to update the program of an Information Retrieval course for Computer Science undergraduates shows that this methodology is reliable and feasible, and so the authors plan to improve it and keep using it in the coming years, leading to a public repository of resources for Information Retrieval courses.
Information Retrieval Meta-Evaluation: Challenges and Opportunities in the Music Domain
TLDR
A survey of past meta-evaluation work in the context of Text Information Retrieval argues that the music community still needs to address various issues concerning the evaluation of music systems and the IR cycle, pointing out directions for further research and proposals along these lines.
Crawling the web for structured documents
TLDR
This demo describes a distributed, focused web crawler for any kind of structured document and shows how to exploit general-purpose resources to gather large amounts of real-world structured documents from the Web.
The Philosophy of Information Retrieval Evaluation
TLDR
The fundamental assumptions and appropriate uses of the Cranfield paradigm, especially as they apply in the context of the evaluation conferences, are reviewed.
Variations in relevance judgments and the measurement of retrieval effectiveness
TLDR
Very high correlations were found among the rankings of systems produced using different relevance judgment sets, indicating that the comparative evaluation of retrieval performance is stable despite substantial differences in relevance judgments, thus reaffirming the use of the TREC collections as laboratory tools (a minimal illustrative sketch of this ranking-correlation check follows the reference list).
On the reliability of information retrieval metrics based on graded relevance
T. Sakai. Information Processing &amp; Management, 2007.
How reliable are the results of large-scale information retrieval experiments?
TLDR
A detailed empirical investigation of the TREC results shows that the measured relative performance of systems appears to be reliable, but that recall is overestimated: it is likely that many relevant documents have not been found.
TREC: Experiment and evaluation in information retrieval
TLDR
This book describes the history, test collections, evaluation methodology, and retrieval results of the Text REtrieval Conference (TREC).
Evaluating Evaluation Measure Stability
TLDR
A novel way of examining the accuracy of the evaluation measures commonly used in information retrieval experiments is presented, which validates several of the rules-of-thumb experimenters rely on and challenges other beliefs, such as the assumption that the common evaluation measures are equally reliable.
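
The stability finding summarized above under "Variations in relevance judgments and the measurement of retrieval effectiveness" is typically checked by scoring the same runs against alternative relevance-judgment sets and correlating the resulting system rankings. The Python sketch below only illustrates that idea with invented data; the runs, judgment sets, and helper functions are hypothetical and not taken from the paper.

# Minimal sketch (not from the paper): score a few hypothetical systems with
# P@10 under two different relevance-judgment sets and compare the induced
# system rankings with Kendall's tau. All names and data here are invented.

from itertools import combinations

def p_at_10(ranked_docs, relevant):
    # Precision at cutoff 10: fraction of the top 10 documents judged relevant.
    return sum(1 for d in ranked_docs[:10] if d in relevant) / 10.0

def kendall_tau(ranking_a, ranking_b):
    # Kendall's tau between two orderings of the same items (no tie handling).
    pos_a = {s: i for i, s in enumerate(ranking_a)}
    pos_b = {s: i for i, s in enumerate(ranking_b)}
    concordant = discordant = 0
    for s1, s2 in combinations(ranking_a, 2):
        agree = (pos_a[s1] - pos_a[s2]) * (pos_b[s1] - pos_b[s2])
        if agree > 0:
            concordant += 1
        elif agree < 0:
            discordant += 1
    n = len(ranking_a)
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical runs: each system returns a ranked list of document ids.
runs = {
    "sysA": [f"d{i}" for i in range(1, 21)],
    "sysB": [f"d{i}" for i in range(5, 25)],
    "sysC": [f"d{i}" for i in range(20, 0, -1)],
}

# Two assessors produce different, partially overlapping judgment sets.
qrels_1 = {f"d{i}" for i in range(1, 11)}
qrels_2 = {f"d{i}" for i in range(3, 14)}

def rank_systems(qrels):
    # Order system names by their P@10 score under the given judgments.
    scores = {name: p_at_10(docs, qrels) for name, docs in runs.items()}
    return sorted(scores, key=scores.get, reverse=True)

ranking_1, ranking_2 = rank_systems(qrels_1), rank_systems(qrels_2)
print("ranking under qrels_1:", ranking_1)
print("ranking under qrels_2:", ranking_2)
print("Kendall's tau between rankings:", round(kendall_tau(ranking_1, ranking_2), 3))

A tau close to 1 over many such judgment-set pairs is the kind of evidence behind the stability claim; the paper's own Table 6 reports a related robustness analysis over random combinations of trels.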