Recommender Response to Diversity and Popularity Bias in User Profiles

by Sushma Channamsetty and Michael D. Ekstrand (The Florida AI Research Society)
Recommender system evaluation usually focuses on the overall effectiveness of the algorithms, whether in terms of measurable accuracy, ability to deliver user satisfaction, or improvement of business metrics. When additional factors, such as the diversity or novelty of the recommendations, are considered, the focus typically remains on the algorithm's overall performance. We examine the relationship of the recommender's output characteristics (accuracy, popularity as an inverse of novelty, and…)

Evaluating content novelty in recommender systems

The findings demonstrate that the proposed measures yield consistent and interpretable results, producing insights that reduce the impact of popularity bias in the evaluation of recommender systems.

The Effect of Algorithmic Bias on Recommender Systems for Massive Open Online Courses

This paper compares existing algorithms and their recommended lists with respect to biases related to course popularity, catalog coverage, and course-category popularity, and further underscores the need to better understand how recommenders respond to bias in diverse contexts.
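Catalog coverage, one of the bias measures mentioned above, is commonly computed as the fraction of the catalog that any user's recommendation list reaches. A minimal sketch with hypothetical item IDs (the function name and data are illustrative, not from the paper):

```python
def catalog_coverage(rec_lists, catalog):
    """Fraction of the catalog that appears in at least one user's
    recommendation list; low coverage is one symptom of popularity bias."""
    recommended = set()
    for items in rec_lists:
        recommended.update(items)
    return len(recommended & set(catalog)) / len(catalog)

# Hypothetical catalog of five courses and three users' top-2 lists.
catalog = ['c1', 'c2', 'c3', 'c4', 'c5']
recs = [['c1', 'c2'], ['c2', 'c3'], ['c1', 'c3']]
print(catalog_coverage(recs, catalog))  # 0.6: two courses are never recommended
```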

Exploring author gender in book rating and recommendation

It is found that common collaborative filtering algorithms tend to propagate at least some of each user’s tendency to rate or read male or female authors into their resulting recommendations, although they differ in both the strength of this propagation and the variance in the gender balance of the recommendation lists they produce.
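The propagation described above can be quantified by comparing the gender balance of a user's profile against that of their recommendation list. A sketch under assumed conventions (excluding items with unknown author gender from the denominator); all item IDs and the mapping are hypothetical:

```python
from collections import Counter

def gender_balance(items, author_gender):
    """Fraction of female-authored items among items with known author
    gender. `author_gender` maps item id -> 'M', 'F', or None; unknowns
    are excluded from the denominator."""
    counts = Counter(g for g in (author_gender.get(i) for i in items)
                     if g in ('M', 'F'))
    total = counts['M'] + counts['F']
    return counts['F'] / total if total else None

# Hypothetical data: compare a user's profile balance to their recommendations.
gender = {'b1': 'F', 'b2': 'M', 'b3': 'F', 'b4': 'M', 'b5': 'M'}
profile = ['b1', 'b2', 'b3']     # 2 of 3 female-authored
recs = ['b2', 'b4', 'b5', 'b1']  # 1 of 4 female-authored

print(gender_balance(profile, gender))
print(gender_balance(recs, gender))
```

Comparing the two numbers per user, and the variance of the second across users, mirrors the propagation and spread the paper reports.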

Neighborhood Construction through Item Popularity in Collaborative Methods

The aim is to show the usefulness of popularity as a significant signal in the creation of recommendations through collaborative methods, to interpret the results through an additional set of techniques, and to assess the impact of the implemented strategies on the long tail.

Measuring Recommender System Effects with Simulated Users

This work offers a simulation framework for measuring the impact of a recommender system under different types of user behavior, and presents two empirical case studies to understand how popularity bias manifests over time.
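The feedback-loop idea can be illustrated with a toy simulation (this is a deliberately minimal sketch, not the paper's framework): a most-popular recommender plus simulated users who always consume one recommended item, tracking how interactions concentrate on the top items over rounds.

```python
import random
from collections import Counter

def simulate(n_users=200, n_items=50, k=5, rounds=10, seed=0):
    """Toy feedback loop: a most-popular recommender paired with users who
    always consume one recommended item. Returns, per round, the share of
    all interactions held by the current top-k items."""
    rng = random.Random(seed)
    pop = Counter({i: 1 for i in range(n_items)})  # uniform prior popularity
    shares = []
    for _ in range(rounds):
        top_k = [item for item, _ in pop.most_common(k)]
        for _ in range(n_users):
            pop[rng.choice(top_k)] += 1  # each user consumes one recommendation
        shares.append(sum(c for _, c in pop.most_common(k)) / sum(pop.values()))
    return shares

shares = simulate()
print(round(shares[0], 3), round(shares[-1], 3))  # top-5 share rises toward 1
```

Because every simulated interaction lands on an already-popular item, the top-k share rises each round, which is the bias-amplification dynamic the paper studies under richer user models.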

Crank up the Volume: Preference Bias Amplification in Collaborative Recommendation

Bias disparity is examined over a range of different algorithms and for different item categories, and significant differences between model-based and memory-based algorithms are demonstrated.

Analyzing and improving stability of matrix factorization for recommender systems

This paper focuses on the effects of training the same model on the same data, but with different initial values for the latent representations of users and items, and presents a generalization of MF called Nearest Neighbors Matrix Factorization (NNMF), which largely improves the stability of both representations and recommendations.

On the instability of embeddings for recommender systems: the case of matrix factorization

A generalization of MF is presented, called Nearest Neighbors Matrix Factorization (NNMF), which propagates the information about items and users to their neighbors, speeding up the training procedure and extending the amount of information that supports recommendations and representations.
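The instability these two papers measure can be summarized as the overlap between top-k lists produced by two trainings that differ only in their random seed. A small sketch of one such stability measure (mean per-user Jaccard overlap; the metric choice and data are illustrative):

```python
def topk_jaccard(run_a, run_b, k=10):
    """Stability of two recommendation runs as the mean Jaccard overlap of
    each user's top-k list. 1.0 means identical lists, 0.0 means disjoint."""
    overlaps = []
    for user in run_a.keys() & run_b.keys():
        a, b = set(run_a[user][:k]), set(run_b[user][:k])
        overlaps.append(len(a & b) / len(a | b))
    return sum(overlaps) / len(overlaps)

# Hypothetical top-3 lists from two MF trainings with different seeds.
seed_1 = {'u1': ['i1', 'i2', 'i3'], 'u2': ['i4', 'i5', 'i6']}
seed_2 = {'u1': ['i1', 'i3', 'i7'], 'u2': ['i4', 'i5', 'i6']}
print(topk_jaccard(seed_1, seed_2, k=3))  # 0.75
```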

Fair Top-k Ranking with multiple protected groups

From Recommendation to Curation: When the System Becomes your Personal Docent

This work considers multiple data sources to enhance the recommendation process, as well as the quality and diversity of the provided suggestions, and pairs each suggestion with an explanation that showcases why a book was recommended, with the aim of easing the decision-making process for the user.

Novelty and Diversity in Top-N Recommendation -- Analysis and Evaluation

It is argued that the motivation of diversity research is to increase the probability of retrieving unusual or novel items that are relevant to the user, and a methodology to evaluate performance in terms of novel item retrieval is introduced.

Rank and relevance in novelty and diversity metrics for recommender systems

A formal framework for the definition of novelty and diversity metrics is presented that unifies and generalizes several state-of-the-art metrics, and identifies three essential ground concepts at the roots of novelty and diversity: choice, discovery, and relevance, upon which the framework is built.
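One widely used novelty metric in this family scores an item by its self-information, -log2 p(i), where p(i) is the fraction of users who have interacted with the item, so rarer items contribute more novelty. A minimal sketch with hypothetical counts:

```python
import math

def mean_self_information(rec_list, interactions, n_users):
    """Novelty of a recommendation list as the mean self-information,
    -log2 p(i), of its items, where p(i) is the fraction of users who
    interacted with item i."""
    total = 0.0
    for item in rec_list:
        p = interactions[item] / n_users
        total += -math.log2(p)
    return total / len(rec_list)

# Hypothetical interaction counts over 1000 users.
counts = {'hit': 500, 'niche': 10}
print(mean_self_information(['hit'], counts, 1000))    # 1.0 bit
print(mean_self_information(['niche'], counts, 1000))  # much higher novelty
```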

Improving recommendation lists through topic diversification

This work presents topic diversification, a novel method designed to balance and diversify personalized recommendation lists in order to reflect the user's complete spectrum of interests, and introduces the intra-list similarity metric to assess the topical diversity of recommendation lists.
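The intra-list similarity metric mentioned above averages pairwise similarity over all item pairs in a list; lower values mean a more diverse list. A sketch using a hypothetical one-hot topic similarity (the topic data and similarity function are illustrative, not from the paper):

```python
from itertools import combinations

def intra_list_similarity(items, sim):
    """Average pairwise similarity over all item pairs in a recommendation
    list; lower values indicate a more topically diverse list."""
    pairs = list(combinations(items, 2))
    return sum(sim(a, b) for a, b in pairs) / len(pairs)

# Hypothetical topics; similarity is 1 for same topic, else 0.
topic = {'a': 'scifi', 'b': 'scifi', 'c': 'romance', 'd': 'history'}
sim = lambda x, y: 1.0 if topic[x] == topic[y] else 0.0

print(intra_list_similarity(['a', 'b', 'c'], sim))  # one same-topic pair of three
print(intra_list_similarity(['a', 'c', 'd'], sim))  # fully diverse: 0.0
```

Topic diversification then reorders a candidate list to trade a little predicted accuracy for a lower intra-list similarity score.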

Item-based collaborative filtering recommendation algorithms

This paper analyzes item-based collaborative filtering techniques and suggests that item-based algorithms provide dramatically better performance than user-based algorithms, while at the same time providing better quality than the best available user-based algorithms.
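The core of the item-based approach is to predict a user's rating for a target item as a similarity-weighted average of their ratings on similar items. A self-contained sketch with cosine similarity over sparse rating vectors (the tiny dataset is hypothetical; the paper also evaluates other similarity and weighting variants):

```python
import math

def cosine(v, w):
    """Cosine similarity between two sparse rating vectors (dict user -> rating)."""
    num = sum(v[u] * w[u] for u in v.keys() & w.keys())
    den = (math.sqrt(sum(x * x for x in v.values()))
           * math.sqrt(sum(x * x for x in w.values())))
    return num / den if den else 0.0

def predict(user_ratings, target, item_vectors):
    """Item-based prediction: similarity-weighted average of the user's
    ratings on items similar to the target item."""
    num = den = 0.0
    for item, rating in user_ratings.items():
        s = cosine(item_vectors[target], item_vectors[item])
        num += s * rating
        den += abs(s)
    return num / den if den else 0.0

# Items as columns of a tiny user-item rating matrix.
item_vectors = {
    'i1': {'u1': 5, 'u2': 3},
    'i2': {'u1': 4, 'u2': 3},
    'i3': {'u2': 5, 'u3': 4},
}
# A user who rated i2 highly and i3 poorly; predict their rating for i1.
pred = predict({'i2': 4, 'i3': 2}, 'i1', item_vectors)
print(round(pred, 2))  # pulled toward 4, since i1 is much more similar to i2
```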

When recommenders fail: predicting recommender failure for algorithm selection and combination

This work presents an analysis of the predictions made by several well-known recommender algorithms on the MovieLens 10M data set, showing that for many cases in which one algorithm fails, there is another that will correctly predict the rating.

A Survey of Accuracy Evaluation Metrics of Recommendation Tasks

This paper reviews the proper construction of offline experiments for deciding on the most appropriate algorithm, discusses three important tasks of recommender systems, and classifies a set of appropriate, well-known evaluation metrics for each task.

Collaborative filtering recommender systems

This study presents an overview of the field of recommender systems with the current generation of recommendation methods, and comprehensively examines CF systems and their algorithms.

Being accurate is not enough: how accuracy metrics have hurt recommender systems

This paper proposes informal arguments that the recommender community should move beyond the conventional accuracy metrics and their associated experimental methodologies, and proposes new user-centric directions for evaluating recommender systems.

An Empirical Analysis of Design Choices in Neighborhood-Based Collaborative Filtering Algorithms

An analysis framework is applied that divides the neighborhood-based prediction approach into three components (similarity computation, neighbor selection, and rating combination) and then examines variants of the key parameters in each component.

BPR: Bayesian Personalized Ranking from Implicit Feedback

This paper presents BPR-Opt, a generic optimization criterion for personalized ranking that is the maximum posterior estimator derived from a Bayesian analysis of the problem, and provides a generic learning algorithm for optimizing models with respect to BPR-Opt.
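BPR optimizes a pairwise objective: for a user, an observed item should score above an unobserved one, trained by stochastic gradient ascent on the log-sigmoid of the score difference. A minimal sketch of one SGD step for a matrix-factorization model (sizes, learning rate, and data are illustrative assumptions):

```python
import math
import random

def bpr_step(W, H, user, pos, neg, lr=0.05, reg=0.01):
    """One SGD step on the BPR-Opt criterion: raise the score of observed
    item `pos` above unobserved item `neg` for `user`. W and H hold latent
    factor vectors for users and items."""
    wu, hp, hn = W[user], H[pos], H[neg]
    x = sum(w * (p - n) for w, p, n in zip(wu, hp, hn))  # score difference
    g = 1.0 / (1.0 + math.exp(x))  # gradient weight: sigmoid(-x)
    for f in range(len(wu)):
        wu_f = wu[f]  # use pre-update value for the item gradients
        wu[f] += lr * (g * (hp[f] - hn[f]) - reg * wu_f)
        hp[f] += lr * (g * wu_f - reg * hp[f])
        hn[f] += lr * (-g * wu_f - reg * hn[f])

# Toy training: one user, item 0 observed, item 1 not.
rng = random.Random(0)
W = [[rng.gauss(0, 0.1) for _ in range(8)]]
H = [[rng.gauss(0, 0.1) for _ in range(8)] for _ in range(2)]
for _ in range(200):
    bpr_step(W, H, 0, 0, 1)

score = lambda u, i: sum(w * h for w, h in zip(W[u], H[i]))
print(score(0, 0) > score(0, 1))  # the observed item now ranks higher
```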