Michael Kluck

Topic creation and relevance assessment are considered crucial components of the evaluation process in Information Retrieval (IR). In the context of the Cross-Language Evaluation Forum (CLEF), the focus lies on evaluating the multilingual functions of IR systems. Therefore, topics are generated in various languages, and the documents delivered by the …
We describe the creation of an infrastructure for testing cross-language text retrieval systems within the context of the Text REtrieval Conferences (TREC) organised by the US National Institute of Standards and Technology (NIST). The approach adopted and the issues that had to be taken into consideration when building a multilingual test suite and …
The development of the evaluation of domain-specific cross-language information retrieval (CLIR) is described in the context of the Cross-Language Evaluation Forum (CLEF) campaigns from 2000 to 2003. The preconditions, the usable data, and the additionally available instruments are described. The main goals of this CLEF task are to allow the evaluation of …