User perception of differences in recommender algorithms

@inproceedings{Ekstrand2014UserPO,
  title={User perception of differences in recommender algorithms},
  author={Michael D. Ekstrand and F. Maxwell Harper and Martijn C. Willemsen and Joseph A. Konstan},
  booktitle={Proceedings of the 8th ACM Conference on Recommender Systems (RecSys '14)},
  year={2014}
}
Recent developments in user evaluation of recommender systems have brought forth powerful new tools for understanding what makes recommendations effective and useful. We apply these methods to understand how users evaluate recommendation lists for the purpose of selecting an algorithm for finding movies. This paper reports on an experiment in which we asked users to compare lists produced by three common collaborative filtering algorithms on the dimensions of novelty, diversity, accuracy… 


Letting Users Choose Recommender Algorithms: An Experimental Study
TLDR
This study gives users the ability to change the algorithm providing their movie recommendations and studies how they make use of this power, examining log data from user interactions with this new feature to understand whether and how users switch among recommender algorithms and select a final algorithm to use.
Rating-Based Collaborative Filtering: Algorithms and Evaluation
TLDR
The concepts, algorithms, and means of evaluation that are at the core of collaborative filtering research and practice are reviewed, and two more recent directions in recommendation algorithms are presented: learning-to-rank and ensemble recommendation algorithms.
Putting Users in Control of their Recommendations
TLDR
This work builds and evaluates a system that incorporates user-tuned popularity and recency modifiers, allowing users to express concepts like "show more popular items" and finds that users who are given these controls evaluate the resulting recommendations much more positively.
Towards Recommender Engineering: tools and experiments for identifying recommender differences
TLDR
This work presents the LensKit toolkit for conducting experiments on a wide variety of recommender algorithms and data sets under different experimental conditions, along with new developments in object-oriented software configuration to support the toolkit, and reports experiments on the configuration options of widely-used algorithms to provide guidance on tuning and configuring them.
Item Familiarity Effects in User-Centric Evaluations of Recommender Systems
TLDR
The results surprisingly showed that users found non-personalized recommendations of popular items the best match for their preferences, and revealed a measurable correlation between item familiarity and user acceptance.
User Personality and User Satisfaction with Recommender Systems
TLDR
It is shown that individual users' preferences for the level of diversity, popularity, and serendipity in recommendation lists cannot be inferred from their ratings alone, and it is suggested that user satisfaction can be improved when users' personality traits are integrated into the process of generating recommendations.
Personalized Recommendations for Music Genre Exploration
TLDR
This is one of the first studies using a recommender system to support users' preference development, and it provides insights into how recommender systems can help users attain new goals and tastes.
Item Familiarity as a Possible Confounding Factor in User-Centric Recommender Systems Evaluation
TLDR
The results of a preliminary recommender systems user study using Mechanical Turk are reported, which indicates that item familiarity is strongly correlated with overall satisfaction.
Displaying User Profiles to Elicit User Awareness in Recommender Systems
  • Y. Hijikata, K. Okubo, S. Nishida
  • Computer Science
    2015 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT)
  • 2015
TLDR
This research shows users their user profiles created by a recommender system and asks them whether they learn any new knowledge about their preferences or interests, conducting a user experiment to determine whether users can become aware of new knowledge about their interests or preferences.
An Empirical Analysis on Transparent Algorithmic Exploration in Recommender Systems
TLDR
This work presents a recommender interface that reveals which items are chosen for exploration; a within-subject study with 94 MTurk workers indicated that users left significantly more feedback on items chosen for exploration with the new interface, and path analysis showed that, only in the new interface, exploration led to increases in user-centric evaluation metrics.

References

Showing 1-10 of 30 references
Explaining the user experience of recommender systems
TLDR
This paper proposes a framework that takes a user-centric approach to recommender system evaluation that links objective system aspects to objective user behavior through a series of perceptual and evaluative constructs (called subjective system aspects and experience, respectively).
Evaluating Recommendation Systems
TLDR
This paper discusses how to compare recommenders based on a set of properties that are relevant for the application, and focuses on comparative studies, where a few algorithms are compared using some evaluation metric, rather than absolute benchmarking of algorithms.
A user-centric evaluation framework for recommender systems
TLDR
A unifying evaluation framework, called ResQue (Recommender systems' Quality of user experience), is proposed, aimed at measuring the qualities of the recommended items; the system's usability, usefulness, interface, and interaction qualities; users' satisfaction with the system; and the influence of these qualities on users' behavioral intentions.
Understanding choice overload in recommender systems
TLDR
Investigation of the effect of recommendation set size and set quality on perceived variety, recommendation set attractiveness, choice difficulty and satisfaction with the chosen item shows that larger sets containing only good items do not necessarily result in higher choice satisfaction compared to smaller sets.
Item-based collaborative filtering recommendation algorithms
TLDR
This paper analyzes item-based collaborative filtering techniques and suggests that item-based algorithms provide dramatically better performance than user-based algorithms, while at the same time providing better quality than the best available user-based algorithms.
Improving recommendation lists through topic diversification
TLDR
This work presents topic diversification, a novel method designed to balance and diversify personalized recommendation lists in order to reflect the user's complete spectrum of interests, and introduces the intra-list similarity metric to assess the topical diversity of recommendation lists.
Solving the apparent diversity-accuracy dilemma of recommender systems
TLDR
This paper introduces a new algorithm specifically to address the challenge of diversity and shows how it can be used to resolve this apparent dilemma when combined in an elegant hybrid with an accuracy-focused algorithm.
Evaluating collaborative filtering recommender systems
TLDR
The key decisions in evaluating collaborative filtering recommender systems are reviewed: the user tasks being evaluated, the types of analysis and datasets being used, the ways in which prediction quality is measured, the evaluation of prediction attributes other than quality, and the user-based evaluation of the system as a whole.
An Empirical Analysis of Design Choices in Neighborhood-Based Collaborative Filtering Algorithms
TLDR
An analysis framework is applied that divides the neighborhood-based prediction approach into three components — similarity computation, neighbor selection, and rating combination — and then examines variants of the key parameters in each component.
Don't look stupid: avoiding pitfalls when recommending research papers
TLDR
This work performs a detailed user study with over 130 users to understand differences between recommender algorithms through an online survey of paper recommendations from the ACM Digital Library, and succinctly summarizes the most striking results as "Don't Look Stupid" in front of users.