@article{Ferro2018Dagstuhl,
  title={The Dagstuhl Perspectives Workshop on Performance Modeling and Prediction},
  author={N. Ferro and N. Fuhr and G. Grefenstette and J. Konstan and P. Castells and E. Daly and Thierry Declerck and Michael D. Ekstrand and Werner Geyer and J. Gonzalo and T. Kuflik and Krister Lind{\'e}n and B. Magnini and Jian-Yun Nie and R. Perego and Bracha Shapira and I. Soboroff and N. Tintarev and Karin M. Verspoor and M. Willemsen and J. Zobel},
  journal={SIGIR Forum},
  year={2018},
  abstract={This paper reports the findings of the Dagstuhl Perspectives Workshop 17442 on performance modeling and prediction in the domains of Information Retrieval, Natural Language Processing, and Recommender Systems. We present a framework for further research, which identifies five major problem areas: understanding measures, performance analysis, making underlying assumptions explicit, identifying application features determining performance, and the development of prediction models describing the…}
}