How many bits per rating?

@inproceedings{Kluver2012HowMB,
  title={How many bits per rating?},
  author={Daniel Kluver and Tien T. Nguyen and Michael D. Ekstrand and Shilad Sen and John Riedl},
  booktitle={RecSys '12},
  year={2012}
}
Most recommender systems assume user ratings accurately represent user preferences. However, prior research shows that user ratings are imperfect and noisy. Moreover, this noise limits the measurable predictive power of any recommender system. We propose an information theoretic framework for quantifying the preference information contained in ratings and predictions. We computationally explore the properties of our model and apply our framework to estimate the efficiency of different rating… 
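
The "bits per rating" of the title is an information-theoretic quantity: roughly, the mutual information between a user's underlying preference and the rating the system actually observes. As a minimal sketch only (the uniform preference distribution, the adjacent-star noise model, and all function names below are assumptions of this illustration, not the authors' actual framework), the following Python code computes that quantity for a hypothetical 5-star scale:

import numpy as np

def mutual_information(joint):
    # I(P; R) in bits, from a joint probability table over
    # (preference, rating) pairs.
    joint = joint / joint.sum()
    p_pref = joint.sum(axis=1, keepdims=True)  # marginal over preferences
    p_rate = joint.sum(axis=0, keepdims=True)  # marginal over ratings
    ratio = np.where(joint > 0, joint / (p_pref * p_rate), 1.0)
    return float(np.sum(joint * np.log2(ratio)))

def joint_with_noise(noise=0.2, levels=5):
    # Hypothetical noise model (an assumption of this sketch): preferences
    # are uniform over the levels; a user with preference p reports p, but
    # with probability `noise` slips to an adjacent star.
    joint = np.zeros((levels, levels))
    for p in range(levels):
        joint[p, p] += (1 - noise) / levels
        joint[p, max(p - 1, 0)] += noise / (2 * levels)
        joint[p, min(p + 1, levels - 1)] += noise / (2 * levels)
    return joint

print(mutual_information(joint_with_noise(noise=0.0)))  # log2(5), about 2.32 bits
print(mutual_information(joint_with_noise(noise=0.3)))  # noise lowers bits per rating

With no noise this sketch yields log2(5) ≈ 2.32 bits per rating, the most a 5-level scale can carry; any rating noise reduces it, which matches the abstract's point that noise limits the measurable predictive power of any recommender.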

Citations

Coherence and inconsistencies in rating behavior: estimating the magic barrier of recommender systems
TLDR
This work presents a mathematical characterization of the magic barrier based on the assumption that user ratings are afflicted with inconsistencies (noise), and proposes a measure of the consistency of user ratings (rating coherence) that predicts the performance of recommendation methods.
Correcting noisy ratings in collaborative recommender systems
Rating support interfaces to improve user experience and recommender accuracy
TLDR
This study introduces interfaces that support the mapping process in recommender systems by reminding users of item characteristics through personalized tags and by relating rating decisions to prior decisions using exemplars.
Collaborative Filtering with Noisy Ratings
TLDR
In NORMA, an adaptive weighting strategy decreases the gradient updates of noisy ratings, so that the learned MA models are less affected by them.
How do People Sort by Ratings?
TLDR
This paper collected 48,000 item-ranking pairs and 4,800 rationales from 4,000 crowd workers, and analyzed the results to understand how users make decisions when comparing rated items, shedding light on the cognitive models users employ to choose between rating distributions.
The Magic Barrier of Recommender Systems - No Magic, Just Ratings
TLDR
The inconsistencies of the user impose a lower bound on the error the system may achieve when predicting ratings for that particular user.
Evaluating the Accuracy and Utility of Recommender Systems
TLDR
It is concluded that current recommendation quality has outgrown the methods and metrics used to evaluate these systems, and that qualitative approaches can be used, with minimal user interference, to correctly estimate the actual quality of recommender systems.
Modeling User Preferences in Recommender Systems
TLDR
A classification framework is proposed for the use of explicit and implicit user feedback in recommender systems, based on a set of distinct properties that include Cognitive Effort, User Model, Scale of Measurement, and Domain Relevance.
Improving recommender systems: user roles and lifecycles
TLDR
This dissertation investigates how the data collection methods and the life cycles of users affect the prediction accuracies and the performance of recommendation algorithms.
Interacting with Recommenders—Overview and Research Directions
TLDR
This work provides a comprehensive overview of the existing literature on user interaction aspects in recommender systems, covering existing approaches for preference elicitation and result presentation, as well as proposals that consider recommendation as an interactive process.
...

References

SHOWING 1-10 OF 29 REFERENCES
I Like It... I Like It Not: Evaluating User Ratings Noise in Recommender Systems
TLDR
This paper presents a user study aimed at quantifying the noise in user ratings that is due to inconsistencies, and analyzes how factors such as item sorting and time of rating affect this noise.
Rate it again: increasing recommendation accuracy by user re-rating
TLDR
A novel approach to improve RS accuracy by reducing the natural noise in the input data via a preprocessing step, together with a novel algorithm to denoise existing datasets by means of re-rating, i.e., by asking users to rate previously rated items again.
Is seeing believing?: how recommender system interfaces affect users' opinions
TLDR
Two aspects of recommender system interfaces that may affect users' opinions are studied: the rating scale and the display of predictions at the time users rate items. Users are found to rate fairly consistently across rating scales.
Evaluating collaborative filtering recommender systems
TLDR
The key decisions in evaluating collaborative filtering recommender systems are reviewed: the user tasks being evaluated, the types of analysis and datasets being used, the ways in which prediction quality is measured, the evaluation of prediction attributes other than quality, and the user-based evaluation of the system as a whole.
Rating: how difficult is it?
TLDR
A comparison of four different rating scales (unary "like it", binary thumbs up/thumbs down, five-star, and a 100-point slider) suggests guidelines for designers choosing between rating scales.
Evaluating Recommendation Systems
TLDR
This paper discusses how to compare recommenders based on a set of properties that are relevant for the application, and focuses on comparative studies, where a few algorithms are compared using some evaluation metric, rather than absolute benchmarking of algorithms.
An Economic Model of User Rating in an Online Recommender System
TLDR
It is found that while economic modeling in this domain requires an initial understanding of user behavior and access to an uncommonly broad set of user survey and behavioral data, it returns significant formal understanding of the activity being modeled.
Tagommenders: connecting users to items through tags
TLDR
Algorithms combining tags with recommenders may deliver both the automation inherent in recommenders and the flexibility and conceptual comprehensibility inherent in tagging systems, leading to flexible recommender systems that leverage the characteristics of items users find most important.
Item-based collaborative filtering recommendation algorithms
TLDR
This paper analyzes item-based collaborative filtering techniques and suggests that item-based algorithms provide dramatically better performance than user-based algorithms, while at the same time providing better quality than the best available user-based algorithms.
Detecting noise in recommender system databases
TLDR
A framework that enables the detection of noise in recommender system databases, together with techniques that enable system administrators to identify and remove from the recommendation process any such noise present in the data.
...