Toward identification and adoption of best practices in algorithmic recommender systems research

@inproceedings{Konstan2013TowardIA,
  title={Toward identification and adoption of best practices in algorithmic recommender systems research},
  author={J. Konstan and Gediminas Adomavicius},
  booktitle={RepSys '13},
  year={2013}
}
One of the goals of data-intensive research, in any field of study, is to grow knowledge over time as additional studies contribute to collective knowledge and understanding. Two steps are critical to making such research cumulative: the individual research results need to be documented thoroughly and conducted on data made available to others (to allow replication and meta-analysis), and the individual research needs to be carried out correctly, following standards and best practices for …

Citations

Improving Accountability in Recommender Systems Research Through Reproducibility
This work argues that facilitating reproducibility of recommender systems experimentation indirectly addresses accountability and transparency in recommender systems research, from the perspective of practitioners, designers, and engineers aiming to assess the capabilities of published research.
Towards Recommender Engineering: tools and experiments for identifying recommender differences
Presents the LensKit toolkit for conducting experiments on a wide variety of recommender algorithms and data sets under different experimental conditions, along with new developments in object-oriented software configuration to support the toolkit, and reports experiments on the configuration options of widely used algorithms to provide guidance on tuning and configuring them.
Towards reproducibility in recommender-systems research
The recommender-systems community needs to survey other research fields and learn from them, find a common understanding of reproducibility, identify and understand the determinants that affect reproducibility, conduct more comprehensive experiments, and establish best-practice guidelines for recommender-systems research.
Elliot: A Comprehensive and Rigorous Framework for Reproducible Recommender Systems Evaluation
Elliot is a comprehensive recommendation framework that runs and reproduces an entire experimental pipeline from a simple configuration file and optimizes hyperparameters for several recommendation algorithms; a generic sketch of this configuration-driven style appears after this list.
An explanation-based approach for experiment reproducibility in recommender systems
Shows that, within the same library, an explanation-based approach can assist in the reproducibility of experiments, and that the approach is both practical and effective.
Research-paper recommender systems: a literature survey
Several actions could improve the research landscape: developing a common evaluation framework, agreement on the information to include in research papers, a stronger focus on non-accuracy aspects and user modeling, a platform for researchers to exchange information, and an open-source framework that bundles the available recommendation approaches.
Report on the workshop on reproducibility and replication in recommender systems evaluation (RepSys)
The need for a clear solution to reproducibility and replication remains largely unmet, which motivates the main questions addressed in the present workshop.
Reproducibility of Experiments in Recommender Systems Evaluation
Compares well-known recommendation algorithms using the same dataset, metrics, and overall settings; the results point to differences across frameworks even under the exact same settings.
Reproduction of Experiments in Recommender Systems Evaluation Based on Explanations
Shows that reproducing results with a different library is challenging, but that within the same library an explanation-based approach can assist in the reproducibility of experiments.
Evaluating Recommender Systems: A Systemized Quantitative Survey
Replicating the results of recommender system evaluations is one of the main concerns in the area. This paper discusses the issue from different angles: 1) it investigates the uniformity of …
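The Elliot entry above describes an experimental pipeline driven entirely by a configuration file, which is what makes a run reproducible: the whole experiment is captured in one declarative artifact. The toy Python sketch below illustrates that style only; every config key and function name here is hypothetical and does not reflect Elliot's actual API, and the "training" step is just a print.

```python
# Illustrative only: a toy configuration-driven experiment runner in the
# spirit of frameworks like Elliot. Config keys and function names are
# hypothetical, NOT Elliot's actual API.
import json
from itertools import product

CONFIG = json.loads("""
{
  "dataset": "movielens-100k",
  "splitting": {"strategy": "random", "test_ratio": 0.2},
  "models": {"ItemKNN": {"neighbors": [20, 50]},
             "MF": {"factors": [16, 32], "reg": [0.01]}},
  "metrics": ["Precision@10", "Recall@10"]
}
""")

def expand_grid(grid):
    """Yield every hyperparameter combination from a dict of value lists."""
    keys = list(grid)
    for values in product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

def run(config):
    """Walk the declared pipeline; a real framework would train and score."""
    for name, grid in config["models"].items():
        for params in expand_grid(grid):
            print(f"train {name} on {config['dataset']} "
                  f"(split={config['splitting']}) with {params}; "
                  f"report {config['metrics']}")

run(CONFIG)
```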

References

Showing 1-10 of 16 references.
Rethinking the recommender research ecosystem: reproducibility, openness, and LensKit
Demonstrates the utility of LensKit by replicating and extending a set of prior comparative studies of recommender algorithms, and investigates a question recently raised by a leader in the recommender systems community about problems with error-based prediction evaluation.
Evaluating Recommendation Systems
This paper discusses how to compare recommenders based on a set of properties that are relevant for the application, and focuses on comparative studies, where a few algorithms are compared using some evaluation metric, rather than absolute benchmarking of algorithms.
Recommender Systems - An Introduction
An overview of approaches to developing state-of-the-art recommender systems, including current algorithmic approaches for generating personalized buying proposals, such as collaborative and content-based filtering, as well as more interactive and knowledge-based approaches.
MyMediaLite: a free recommender system library
The library addresses two common scenarios in collaborative filtering: rating prediction and item prediction from positive-only implicit feedback. It also contains methods for real-time updates and for loading and storing already trained recommender models.
Toward the next generation of recommender systems: a survey of the state-of-the-art and possible extensions
This paper presents an overview of the field of recommender systems and describes the current generation of recommendation methods, which are usually classified into three main categories: content-based, collaborative, and hybrid recommendation approaches.
Evaluating collaborative filtering recommender systems
Reviews the key decisions in evaluating collaborative filtering recommender systems: the user tasks being evaluated, the types of analysis and datasets being used, the ways in which prediction quality is measured, the evaluation of prediction attributes other than quality, and the user-based evaluation of the system as a whole; two of the standard error-based metrics it reviews are sketched after this list.
GroupLens: applying collaborative filtering to Usenet news
The combination of high volume and personal taste made Usenet news a promising candidate for collaborative filtering, and the potential predictive utility for Usenet news was very high.
GroupLens: an open architecture for collaborative filtering of netnews
GroupLens is a system for collaborative filtering of netnews that helps people find articles they will like in the huge stream of available articles; users protect their privacy by entering ratings under pseudonyms, without reducing the effectiveness of score prediction.
Social information filtering: algorithms for automating "word of mouth"
Describes the implementation of a networked system called Ringo, which makes personalized recommendations for music albums and artists; four different algorithms for making recommendations through social information filtering were tested and compared. A generic sketch of this neighborhood-based style appears after this list.
Recommending and evaluating choices in a virtual community of use
Presents a general history-of-use method that automates a social process for informing choice, and reports on how it fares in a fielded test case: the selection of videos from a large set.
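The GroupLens and Ringo papers above describe neighborhood-based ("social information filtering") prediction: estimate a user's rating for an item as that user's mean rating plus a correlation-weighted average of neighbors' deviations from their own means. The minimal sketch below illustrates that scheme with made-up data; it makes no claim to match any cited system's actual code.

```python
# Minimal user-based collaborative filtering, in the spirit of the
# GroupLens/Ringo papers above. Illustrative only; data are made up.
from math import sqrt

ratings = {  # user -> {item: rating}
    "alice": {"a": 5, "b": 3, "c": 4},
    "bob":   {"a": 4, "b": 2, "c": 5, "d": 3},
    "carol": {"a": 2, "b": 5, "d": 1},
}

def pearson(u, v):
    """Pearson correlation over the items both users have rated."""
    common = ratings[u].keys() & ratings[v].keys()
    if len(common) < 2:
        return 0.0
    mu_u = sum(ratings[u][i] for i in common) / len(common)
    mu_v = sum(ratings[v][i] for i in common) / len(common)
    num = sum((ratings[u][i] - mu_u) * (ratings[v][i] - mu_v) for i in common)
    den = sqrt(sum((ratings[u][i] - mu_u) ** 2 for i in common)) * \
          sqrt(sum((ratings[v][i] - mu_v) ** 2 for i in common))
    return num / den if den else 0.0

def predict(user, item):
    """User's mean plus similarity-weighted neighbor deviations."""
    mu = sum(ratings[user].values()) / len(ratings[user])
    num = den = 0.0
    for other in ratings:
        if other == user or item not in ratings[other]:
            continue
        w = pearson(user, other)
        mu_o = sum(ratings[other].values()) / len(ratings[other])
        num += w * (ratings[other][item] - mu_o)
        den += abs(w)
    return mu + num / den if den else mu

print(predict("alice", "d"))  # predicted rating for an unseen item
```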
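Several of the evaluation papers above, notably Herlocker et al.'s survey, compare recommenders with error-based prediction metrics. The two most common are mean absolute error (MAE) and root-mean-square error (RMSE) between predicted and actual ratings; a short illustrative sketch, assuming paired lists of predictions and ground-truth ratings:

```python
# MAE and RMSE, the error-based prediction-quality metrics most often
# reported in the evaluation papers above. Illustrative sketch only.
from math import sqrt

def mae(predicted, actual):
    """Mean absolute error between paired rating lists."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

def rmse(predicted, actual):
    """Root-mean-square error; penalizes large errors more than MAE."""
    return sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

preds, truth = [4.1, 3.5, 2.0], [4, 3, 3]
print(f"MAE={mae(preds, truth):.3f}  RMSE={rmse(preds, truth):.3f}")
```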