Rethinking the recommender research ecosystem: reproducibility, openness, and LensKit

@inproceedings{Ekstrand2011RethinkingTR,
  title={Rethinking the recommender research ecosystem: reproducibility, openness, and LensKit},
  author={Michael D. Ekstrand and Michael Ludwig and Joseph A. Konstan and John T. Riedl},
  booktitle={Proceedings of the Fifth ACM Conference on Recommender Systems (RecSys '11)},
  year={2011}
}
Recommender systems research is being slowed by the difficulty of replicating and comparing research results. Published research uses various experimental methodologies and metrics that are difficult to compare. It also often fails to sufficiently document the details of proposed algorithms or the evaluations employed. Researchers waste time reimplementing well-known algorithms, and the new implementations may miss key details from the original algorithm or its subsequent refinements. When…
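The abstract's point about lost details is concrete: even a textbook algorithm such as item-item collaborative filtering involves several small choices (per-user mean-centering, similarity shrinkage, neighborhood size) that papers frequently leave unreported. The following is an illustrative sketch only, not LensKit code: it uses plain numpy, and the parameter values k=20 and shrink=50 are assumptions picked for demonstration. The comments flag the refinements that ad-hoc reimplementations most often drop.

import numpy as np

def predict(R, u, t, k=20, shrink=50):
    """Predict R[u, t] with item-item collaborative filtering
    (adjusted cosine). R is a dense users-by-items array with
    np.nan marking missing ratings. Illustrative sketch only."""
    mask = ~np.isnan(R)
    # Mean-center each user's ratings before computing similarities
    # (adjusted cosine); skipping this step silently changes the
    # algorithm being evaluated.
    mu = np.nanmean(R, axis=1)
    C = np.where(mask, R - mu[:, None], 0.0)

    # Cosine similarity between item t and every item, with both
    # numerator and denominator restricted to co-rating users.
    num = (C[:, [t]] * C).sum(axis=0)
    den = np.sqrt(((C[:, [t]] ** 2) * mask).sum(axis=0)
                  * ((C ** 2) * mask[:, [t]]).sum(axis=0)) + 1e-12
    sim = num / den

    # Shrink similarities supported by few co-ratings toward zero,
    # a standard refinement that reimplementations often omit.
    n_co = (mask & mask[:, [t]]).sum(axis=0)
    sim *= n_co / np.maximum(n_co + shrink, 1)

    # Score from the k most similar items the user actually rated,
    # keeping positive similarities only.
    rated = np.flatnonzero(mask[u])
    rated = rated[rated != t]
    nbrs = rated[np.argsort(-sim[rated])][:k]
    nbrs = nbrs[sim[nbrs] > 0]
    if nbrs.size == 0:
        return mu[u]  # no usable neighbors: fall back to user mean
    return mu[u] + sim[nbrs] @ C[u, nbrs] / np.abs(sim[nbrs]).sum()

R = np.array([[4.0, np.nan, 3.0, 5.0],
              [4.0, 2.0, 4.0, np.nan],
              [np.nan, 1.0, 2.0, 4.0],
              [5.0, 3.0, np.nan, 5.0]])
print(predict(R, u=3, t=2))  # 5.0 here: only item 0 passes the positivity filter

Because choices like these are rarely documented, two papers can both report results for "item-item CF" that are not actually comparable; this is exactly the gap that open, documented implementations are meant to close.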

Citations

Towards reproducibility in recommender-systems research
Improving Accountability in Recommender Systems Research Through Reproducibility
Comparative recommender system evaluation: benchmarking recommendation frameworks
Mix and Rank: A Framework for Benchmarking Recommender Systems
OpenRec: A Modular Framework for Extensible and Adaptable Recommendation Algorithms
Toward identification and adoption of best practices in algorithmic recommender systems research
