Toward identification and adoption of best practices in algorithmic recommender systems research

Joseph A. Konstan and Gediminas Adomavicius. In RepSys '13.
One of the goals of data-intensive research, in any field of study, is to grow knowledge over time as additional studies contribute to collective knowledge and understanding. Two steps are critical to making such research cumulative: individual research results need to be documented thoroughly and based on data made available to others (to allow replication and meta-analysis), and the individual research needs to be carried out correctly, following standards and best practices for…


Progress in Recommender Systems Research: Crisis? What Crisis?

Scholars in algorithmic recommender systems research have developed a largely standardized scientific method, where progress is claimed by showing that a new algorithm outperforms existing ones on one or several accuracy measures.

Improving Accountability in Recommender Systems Research Through Reproducibility

This work argues that facilitating reproducibility of recommender-system experimentation indirectly addresses the issues of accountability and transparency in recommender systems research, from the perspectives of practitioners, designers, and engineers aiming to assess the capabilities of published research works.

BARS: Towards Open Benchmarking for Recommender Systems

This paper presents an initiative aimed at open benchmarking for recommender systems, which sets up a standardized benchmarking pipeline for reproducible research, integrating all the details about datasets, source code, hyper-parameter settings, running logs, and evaluation results.

Towards Recommender Engineering: tools and experiments for identifying recommender differences

Presents the LensKit toolkit for conducting experiments on a wide variety of recommender algorithms and data sets under different experimental conditions, along with new developments in object-oriented software configuration to support the toolkit, and experiments on the configuration options of widely used algorithms that provide guidance on tuning and configuring them.

Where Do We Go From Here? Guidelines For Offline Recommender Evaluation

This paper examines four larger issues in recommender systems research (uncertainty estimation, generalization, hyperparameter optimization, and dataset pre-processing) in detail to arrive at a set of guidelines, and presents TrainRec, a lightweight and flexible toolkit for offline training and evaluation of recommender systems that implements these guidelines.

Towards reproducibility in recommender-systems research

The recommender-system community needs to survey other research fields and learn from them, find a common understanding of reproducibility, identify and understand the determinants that affect reproducibility, conduct more comprehensive experiments, and establish best-practice guidelines for recommender-systems research.

A Guideline-Based Approach for Assisting with the Reproducibility of Experiments in Recommender Systems Evaluation

According to the proposed guideline-based approach, it can be difficult to reproduce results if certain settings are missing, resulting in more evaluation cycles being required to identify the optimal settings.

Elliot: A Comprehensive and Rigorous Framework for Reproducible Recommender Systems Evaluation

Elliot is a comprehensive recommendation framework that aims to run and reproduce an entire experimental pipeline by processing a simple configuration file and optimizes hyperparameters for several recommendation algorithms.

An explanation-based approach for experiment reproducibility in recommender systems

When the same library is used, an explanation-based approach can assist in the reproducibility of experiments, and the results show that it is both practical and effective.

Research-paper recommender systems: a literature survey

Several actions could improve the research landscape: developing a common evaluation framework, agreement on the information to include in research papers, a stronger focus on non-accuracy aspects and user modeling, a platform for researchers to exchange information, and an open-source framework that bundles the available recommendation approaches.

Rethinking the recommender research ecosystem: reproducibility, openness, and LensKit

The utility of LensKit is demonstrated by replicating and extending a set of prior comparative studies of recommender algorithms, and a question recently raised by a leader in the recommender systems community on problems with error-based prediction evaluation is investigated.

Recommender Systems Handbook

This handbook illustrates how recommender systems can support users in decision-making, planning, and purchasing processes, and how they work for well-known corporations such as Amazon, Google, Microsoft, and AT&T.

Evaluating Recommendation Systems

This paper discusses how to compare recommenders based on a set of properties that are relevant for the application, and focuses on comparative studies, where a few algorithms are compared using some evaluation metric, rather than absolute benchmarking of algorithms.

Recommender Systems - An Introduction

An overview of approaches to developing state-of-the-art recommender systems, including current algorithmic approaches for generating personalized buying proposals, such as collaborative and content-based filtering, as well as more interactive and knowledge-based approaches.

MyMediaLite: a free recommender system library

The library addresses two common scenarios in collaborative filtering: rating prediction and item prediction from positive-only implicit feedback, and contains methods for real-time updates and loading/storing of already trained recommender models.

Toward the next generation of recommender systems: a survey of the state-of-the-art and possible extensions

This paper presents an overview of the field of recommender systems and describes the current generation of recommendation methods, which are usually classified into three main categories: content-based, collaborative, and hybrid recommendation approaches.

Evaluating collaborative filtering recommender systems

The key decisions in evaluating collaborative filtering recommender systems are reviewed: the user tasks being evaluated, the types of analysis and datasets being used, the ways in which prediction quality is measured, the evaluation of prediction attributes other than quality, and the user-based evaluation of the system as a whole.

GroupLens: applying collaborative filtering to Usenet news

The combination of high volume and personal taste made Usenet news a promising candidate for collaborative filtering, and the potential predictive utility for Usenet news was very high.

GroupLens: an open architecture for collaborative filtering of netnews

GroupLens is a system for collaborative filtering of netnews that helps people find articles they will like in the huge stream of available articles, while protecting users' privacy by letting them enter ratings under pseudonyms, without reducing the effectiveness of score prediction.

Social information filtering: algorithms for automating “word of mouth”

This paper describes the implementation of a networked system called Ringo, which makes personalized recommendations for music albums and artists; four different algorithms for making recommendations by social information filtering were tested and compared.