How many bits per rating?

@inproceedings{Kluver2012HowMB,
  title={How many bits per rating?},
  author={Daniel Kluver and Tien T. Nguyen and Michael D. Ekstrand and Shilad Sen and John Riedl},
  booktitle={ACM Conference on Recommender Systems},
  year={2012}
}
Most recommender systems assume user ratings accurately represent user preferences. However, prior research shows that user ratings are imperfect and noisy. Moreover, this noise limits the measurable predictive power of any recommender system. We propose an information theoretic framework for quantifying the preference information contained in ratings and predictions. We computationally explore the properties of our model and apply our framework to estimate the efficiency of different rating… 
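
The framework itself is not reproduced in this truncated abstract, but the core quantity it points to, the mutual information between a user's true preference and the observed rating, is easy to sketch. The minimal example below assumes a discrete five-level preference alphabet, a uniform prior, and a symmetric adjacent-level noise channel; all three are illustrative assumptions, not the paper's actual model.

import numpy as np

def mutual_information(prior, channel):
    # I(P; R) in bits for a discrete preference -> rating channel.
    # prior[i] = P(preference i); channel[i, j] = P(rating j | preference i).
    joint = prior[:, None] * channel            # joint P(p, r)
    marginal = joint.sum(axis=0)                # P(r)
    indep = prior[:, None] * marginal[None, :]  # P(p) * P(r)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / indep[mask])))

# Illustrative channel: each true preference leaks probability eps
# to the adjacent rating levels (the noise shape is an assumption).
eps = 0.3
channel = np.eye(5) * (1 - eps)
for i in range(5):
    adjacent = [j for j in (i - 1, i + 1) if 0 <= j < 5]
    for j in adjacent:
        channel[i, j] += eps / len(adjacent)

prior = np.full(5, 0.2)  # uniform preference prior (assumption)
print(f"{mutual_information(prior, channel):.2f} bits per rating")

With no noise (eps = 0) this returns log2(5), about 2.32 bits; noise drives the number down, which is the sense in which noise limits the measurable predictive power of any recommender.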

Citations

Coherence and inconsistencies in rating behavior: estimating the magic barrier of recommender systems

This work presents a mathematical characterization of the magic barrier based on the assumption that user ratings are afflicted with inconsistencies (noise), and proposes a measure of the consistency of user ratings (rating coherence) that predicts the performance of recommendation methods.

Correcting noisy ratings in collaborative recommender systems

Rating support interfaces to improve user experience and recommender accuracy

This study introduces interfaces that support the process of mapping preferences onto ratings: they remind the user of item characteristics through personalized tags and relate rating decisions to prior ones through exemplars.

Collaborative Filtering with Noisy Ratings

NORMA proposes an adaptive weighting strategy that shrinks the gradient updates of noisy ratings, so that the learned matrix approximation (MA) models are less prone to them.
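
The summary above does not spell out NORMA's weighting scheme, so the sketch below only illustrates the general idea: a plain SGD matrix-approximation update whose step is shrunk for ratings with large residuals, on the assumption that such ratings are more likely to be noisy. The exponential weight, learning rate, and regularizer are all illustrative choices, not NORMA's.

import numpy as np

def weighted_sgd_step(P, Q, u, i, r, lr=0.01, reg=0.05):
    # One update for r_ui ~ P[u] . Q[i]; a residual-based weight
    # makes suspected-noisy ratings move the factors less.
    err = r - P[u] @ Q[i]
    w = np.exp(-abs(err))  # large residual -> smaller update (assumed form)
    pu = P[u].copy()
    P[u] += lr * (w * err * Q[i] - reg * P[u])
    Q[i] += lr * (w * err * pu - reg * Q[i])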

How do People Sort by Ratings?

This paper collected 48,000 item-ranking pairs from 4,000 crowd workers, along with 4,800 rationales, and analyzed the results to understand how users decide between rated items, shedding light on the cognitive models users employ to choose between rating distributions.

The Magic Barrier of Recommender Systems - No Magic, Just Ratings

The inconsistencies of the user impose a lower bound on the error the system may achieve when predicting ratings for that particular user.
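
Under the noise model behind these magic-barrier results, an observed rating is the user's true preference plus zero-mean noise, so the best achievable RMSE equals the root of the mean noise variance. A minimal sketch of estimating that bound from a re-rating study, assuming a dataset with repeated (user, item) ratings:

import numpy as np
from collections import defaultdict

def magic_barrier_estimate(reratings):
    # reratings: iterable of (user, item, rating) with several entries
    # per (user, item) pair. The within-pair variance estimates the
    # noise variance; its root mean is the RMSE lower bound.
    groups = defaultdict(list)
    for user, item, rating in reratings:
        groups[(user, item)].append(rating)
    within = [np.var(rs) for rs in groups.values() if len(rs) > 1]
    return float(np.sqrt(np.mean(within)))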

Evaluating the Accuracy and Utility of Recommender Systems

It is concluded that current recommendation quality has outgrown the methods and metrics used to evaluate these systems, and that qualitative approaches can be used, with minimal user interference, to correctly estimate the actual quality of recommendation systems.

Modeling User Preferences in Recommender Systems

This work proposes a classification framework for the use of explicit and implicit user feedback in recommender systems, based on a set of distinct properties that include Cognitive Effort, User Model, Scale of Measurement, and Domain Relevance.

Improving recommender systems: user roles and lifecycles

This dissertation investigates how the data collection methods and the life cycles of users affect the prediction accuracies and the performance of recommendation algorithms.

Interacting with Recommenders—Overview and Research Directions

This work provides a comprehensive overview on the existing literature on user interaction aspects in recommender systems, covering existing approaches for preference elicitation and result presentation, as well as proposals that consider recommendation as an interactive process.

References

Showing 1-10 of 25 references.

I Like It... I Like It Not: Evaluating User Ratings Noise in Recommender Systems

This paper presents a user study aimed at quantifying the noise in user ratings that is due to inconsistencies, and analyzes how factors such as item sorting and time of rating affect this noise.

Rate it again: increasing recommendation accuracy by user re-rating

This work improves recommender accuracy by reducing the natural noise in the input data in a preprocessing step, proposing an algorithm that denoises existing datasets by means of re-rating, i.e. by asking users to rate previously rated items again.

Is seeing believing?: how recommender system interfaces affect users' opinions

This work studies two aspects of recommender system interfaces that may affect users' opinions: the rating scale and the display of predictions at the time users rate items. It finds that users rate fairly consistently across rating scales.

Rating: how difficult is it?

A comparison of four rating scales, unary ("like it"), binary (thumbs up / thumbs down), five-star, and a 100-point slider, suggests guidelines for designers choosing between rating scales.

Evaluating Recommendation Systems

This paper discusses how to compare recommenders based on a set of properties that are relevant for the application, and focuses on comparative studies, where a few algorithms are compared using some evaluation metric, rather than absolute benchmarking of algorithms.

An Economic Model of User Rating in an Online Recommender System

It is found that while economic modeling in this domain requires an initial understanding of user behavior and access to an uncommonly broad set of user survey and behavioral data, it returns significant formal understanding of the activity being modeled.

Item-based collaborative filtering recommendation algorithms

This paper analyzes item-based collaborative filtering techniques and suggests that item-based algorithms provide dramatically better performance than user-based algorithms, while at the same time providing better quality than the best available user-based algorithms.
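
As a rough illustration of the item-based approach, the sketch below predicts a rating as a similarity-weighted sum over the most similar items the user has rated. It uses plain cosine similarity on a dense matrix with zeros standing for "unrated", both simplifications relative to the paper's adjusted-cosine, sparse formulation.

import numpy as np

def item_based_predict(R, u, i, k=20):
    # R: dense users x items matrix, 0 = unrated (simplifying assumption).
    rated = np.flatnonzero(R[u])            # items user u has rated
    rated = rated[rated != i]
    if rated.size == 0:
        return 0.0
    norms = np.linalg.norm(R[:, rated], axis=0) * np.linalg.norm(R[:, i])
    sims = (R[:, rated].T @ R[:, i]) / np.where(norms == 0, 1.0, norms)
    order = np.argsort(sims)[-k:]           # k most similar rated items
    top_items, top_sims = rated[order], sims[order]
    denom = np.abs(top_sims).sum()
    return float(R[u, top_items] @ top_sims / denom) if denom else 0.0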

Detecting noise in recommender system databases

This work presents a framework for detecting noise in recommender system databases and devises techniques that enable system administrators to identify and remove from the recommendation process any such noise present in the data.
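
One simple instance of such a check, offered as an assumption rather than the paper's exact criterion, flags a rating as possibly noisy when it differs too much from a prediction made with that rating held out:

def flag_noisy_ratings(ratings, predict, threshold=1.5):
    # ratings: iterable of (user, item, value); predict(u, i) is any
    # recommender trained with the rating under test held out.
    # The fixed absolute-difference threshold is an illustrative choice.
    return [(u, i, r) for u, i, r in ratings
            if abs(r - predict(u, i)) > threshold]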

Collaborative recommendation: A robustness analysis

This work analyzes the robustness of collaborative recommendation, i.e. the ability to make recommendations despite (possibly intentional) noisy product ratings; it formalizes recommendation accuracy in machine learning terms and develops theoretically justified models of accuracy.

Eigentaste: A Constant Time Collaborative Filtering Algorithm

This work compares Eigentaste to alternative algorithms using data from Jester, an online joke recommending system, measuring performance with the Normalized Mean Absolute Error (NMAE).
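
NMAE simply divides MAE by the span of the rating scale, which makes errors comparable across scales; for Jester's continuous [-10, 10] ratings the divisor is 20. A minimal implementation:

def nmae(actual, predicted, r_min=-10.0, r_max=10.0):
    # Defaults match Jester's rating range; pass the scale bounds of
    # whatever dataset is being evaluated.
    mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
    return mae / (r_max - r_min)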