Corpus ID: 42914272

The Demographics of Cool: Popularity and Recommender Performance for Different Groups of Users

@inproceedings{Ekstrand2017TheDO,
  title={The Demographics of Cool: Popularity and Recommender Performance for Different Groups of Users},
  author={Michael D. Ekstrand and Maria Soledad Pera},
  booktitle={RecSys Posters},
  year={2017}
}
ABSTRACT Typical recommender evaluations treat users as a homogeneous unit. However, user subgroups often differ in their tastes, which can translate into diverse recommendation needs; as a result, these groups may be satisfied to different degrees by the recommendations they receive. We explore the offline top-N performance of collaborative filtering algorithms across two domains. We find that several strategies achieve higher accuracy for dominant demographic groups, thus increasing the…
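A minimal sketch of this kind of group-wise offline evaluation, assuming per-user top-N accuracy scores (nDCG here) have already been computed; the column names and the pandas-based aggregation are illustrative assumptions, not the paper's actual pipeline:

import pandas as pd

# Hypothetical per-user results: one row per user with a top-N accuracy
# score (e.g., nDCG@10) and demographic attributes (all names assumed).
results = pd.DataFrame({
    "user_id":   [1, 2, 3, 4, 5, 6],
    "gender":    ["F", "M", "M", "F", "M", "M"],
    "age_group": ["18-24", "18-24", "25-34", "25-34", "25-34", "35-44"],
    "ndcg":      [0.21, 0.35, 0.40, 0.18, 0.38, 0.33],
})

# A single pooled average treats users as one homogeneous unit ...
print("overall nDCG:", results["ndcg"].mean())

# ... whereas grouping by demographic attribute exposes subgroup gaps.
for attr in ("gender", "age_group"):
    print(results.groupby(attr)["ndcg"].agg(["mean", "count"]))

Comparing the per-group means against the group sizes is what reveals whether an algorithm favors the dominant demographic.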

Citations

Contextual Meta-Bandit for Recommender Systems Selection
TLDR: This work proposes a meta-bandit that acts as a policy over options, where each option maps to a pre-trained, independent recommender system, and finds that it outperforms any of the recommenders separately, as well as an ensemble of them.
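A rough sketch of the idea, under stated assumptions: each pre-trained recommender is treated as an arm of a bandit. The cited work uses a contextual policy; the version below is a plain epsilon-greedy simplification, and the recommend(user, n) interface of the base recommenders is an assumption.

import random

class MetaBandit:
    # Epsilon-greedy selection over pre-trained recommenders (arms).
    def __init__(self, recommenders, epsilon=0.1):
        self.recommenders = recommenders   # assumed: objects with .recommend(user, n)
        self.epsilon = epsilon
        self.counts = [0] * len(recommenders)
        self.values = [0.0] * len(recommenders)  # running mean reward per arm

    def select(self):
        # Explore a random arm with probability epsilon, else exploit.
        if random.random() < self.epsilon:
            return random.randrange(len(self.recommenders))
        return max(range(len(self.recommenders)), key=lambda i: self.values[i])

    def recommend(self, user, n=10):
        arm = self.select()
        return arm, self.recommenders[arm].recommend(user, n)

    def update(self, arm, reward):
        # Incremental mean update from observed feedback (e.g., a click).
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]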
Travelers vs. Locals: The Effect of Cluster Analysis in Point-of-Interest Recommendation
TLDR: The results on the Foursquare data set of 139,270 users in five cities show that locals, despite being the most numerous group of users, tend to obtain lower ranking accuracy than travelers, while also receiving more novel and diverse POI recommendations.
One-at-a-time: A Meta-Learning Recommender-System for Recommendation-Algorithm Selection on Micro Level
TLDR: This paper proposes a meta-learning-based approach to recommendation, which aims to select the best algorithm for each user-item pair, and develops a distinction between meta-learners that operate per-instance, per-data subset, and per-dataset (global level).
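One way to read "select the best algorithm for each user-item pair" is as a supervised meta-learning problem: label each training instance with whichever base algorithm had the lowest error on it, then train a classifier on instance features. A minimal sketch under that reading; the features, labels, and function-valued algorithms are all illustrative assumptions.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical meta-training data: per (user, item) instance features
# (e.g., user activity, item popularity) and, as the label, the index
# of the base algorithm with the lowest error on that instance.
X = np.array([[20, 5], [3, 40], [15, 12], [2, 50], [30, 8]])
y = np.array([0, 1, 0, 1, 0])

meta = DecisionTreeClassifier(max_depth=3).fit(X, y)

def route(instance_features, algorithms):
    # Per-instance selection: ask the meta-learner which base algorithm
    # to use for this (user, item) pair, then delegate to it.
    choice = meta.predict(np.asarray(instance_features).reshape(1, -1))[0]
    return algorithms[choice](instance_features)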
A Novel Approach to Recommendation Algorithm Selection using Meta-Learning
TLDR: This paper proposes a meta-learning-based approach to recommendation, which aims to select the best algorithm for each user-item pair, and develops a distinction between meta-learners that operate per-instance, per-data subset, and per-dataset (global level).
Per-Instance Algorithm Selection for Recommender Systems via Instance Clustering
TLDR: This paper proposes a per-instance meta-learner that clusters data instances and predicts the best algorithm for unseen instances according to cluster membership; it explores the performance of the base algorithms on a ratings dataset and empirically shows that the error of a perfect algorithm selector monotonically decreases for larger pools of algorithms.
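The clustering variant can be sketched similarly, assuming per-instance features and per-algorithm errors on a training set (both synthetic below): cluster the instances, record the best algorithm per cluster, and route unseen instances by cluster membership.

import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-ins: instance features, and each base algorithm's
# error on each instance (columns = algorithms; lower is better).
X = np.random.rand(200, 4)
errors = np.random.rand(200, 3)

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)

# For each cluster, pick the algorithm with the lowest mean error.
best_per_cluster = np.array([
    errors[kmeans.labels_ == c].mean(axis=0).argmin()
    for c in range(kmeans.n_clusters)
])

def select_algorithm(instance_features):
    # Unseen instances inherit the best algorithm of their cluster.
    cluster = kmeans.predict(instance_features.reshape(1, -1))[0]
    return best_per_cluster[cluster]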

References

Precision-oriented evaluation of recommender systems: an algorithmic comparison
TLDR: In three experiments with three state-of-the-art recommenders, four of the evaluation methodologies are consistent with each other and differ from error metrics in the comparative performance measurements they produce.
Performance of recommender algorithms on top-n recommendation tasks
TLDR: An extensive evaluation of several state-of-the-art recommender algorithms suggests that algorithms optimized for minimizing RMSE do not necessarily perform as expected on top-N recommendation tasks, and new variants of two collaborative filtering algorithms are offered.
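A toy example of the RMSE/top-N mismatch this TLDR describes (all numbers invented for illustration): predictor A has the lower RMSE, but its nearly flat scores rank the relevant item below a competitor, while predictor B, despite larger errors, ranks it first.

import numpy as np

# Toy ratings for five items; item 0 is the single relevant item.
truth  = np.array([5.0, 4.0, 3.0, 2.0, 1.0])
pred_a = np.array([3.0, 3.1, 3.0, 2.9, 2.8])  # nearly flat predictions
pred_b = np.array([4.9, 1.0, 1.0, 1.0, 1.0])  # crude, but ranks item 0 first

def rmse(pred):
    return np.sqrt(np.mean((pred - truth) ** 2))

def hit_at_1(pred):
    return int(np.argmax(pred) == np.argmax(truth))

for name, pred in [("A", pred_a), ("B", pred_b)]:
    print(name, "RMSE=%.2f" % rmse(pred), "hit@1=%d" % hit_at_1(pred))
# A wins on RMSE (~1.33 vs ~1.67), yet only B puts the relevant item first.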
Performance prediction and evaluation in recommender systems: An information retrieval perspective
TLDR: This thesis investigates the definition and formalisation of performance prediction methods for recommender systems, and evaluates the quality of the proposed solutions in terms of the correlation between the predicted and the observed performance on test data.
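Evaluating such a performance predictor reduces to correlating its per-user predictions with the metric values observed on test data; a minimal sketch with invented numbers, using Spearman rank correlation as one reasonable choice:

import numpy as np
from scipy.stats import spearmanr

# Hypothetical values: a predictor's estimated per-user performance and
# the performance actually observed on held-out test data.
predicted = np.array([0.8, 0.3, 0.6, 0.2, 0.9, 0.5])
observed  = np.array([0.7, 0.2, 0.5, 0.3, 0.8, 0.4])

rho, pval = spearmanr(predicted, observed)
print(f"Spearman rho={rho:.2f} (p={pval:.3f})")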
Auditing Search Engines for Differential Satisfaction Across Demographics
TLDR: A framework for internally auditing such services for differences in user satisfaction across demographic groups is presented, using search engines as a case study, and three methods for measuring latent differences in user satisfaction from observed differences in evaluation metrics are proposed.
The MovieLens Datasets: History and Context
TLDR: This paper documents the history of MovieLens and the MovieLens datasets, including a discussion of lessons learned from running a long-standing, live research platform from the perspective of a research organization, and documents best practices and limitations of using the MovieLens datasets in new research.
Rethinking the recommender research ecosystem: reproducibility, openness, and LensKit
TLDR: The utility of LensKit is demonstrated by replicating and extending a set of prior comparative studies of recommender algorithms, and by investigating a question recently raised by a leader in the recommender systems community about problems with error-based prediction evaluation.
Evaluating Recommendation Systems
TLDR: This paper discusses how to compare recommenders based on a set of properties that are relevant for the application, and focuses on comparative studies, where a few algorithms are compared using some evaluation metric, rather than absolute benchmarking of algorithms.