Corpus ID: 9684400

Overview of EIREX 2012: Social Media

@article{Urbano2013OverviewOE,
  title={Overview of EIREX 2012: Social Media},
  author={Juli{\'a}n Urbano and M{\'o}nica Marrero and Diego Mart{\'i}n and Jorge Luis Morato Lara},
  journal={ArXiv},
  year={2013},
  volume={abs/1302.1178}
}
The Information Retrieval Education through EXperimentation track (EIREX 2012), the third in the series, was run at the University Carlos III of Madrid during the 2012 spring semester. EIREX is a series of experiments designed to foster new Information Retrieval (IR) education methodologies and resources, with the specific goal of teaching undergraduate IR courses from an experimental perspective. For an introduction to the motivation behind the EIREX experiments, see the first sections of…

