Publications
Explaining the user experience of recommender systems
This paper proposes a framework for user-centric recommender system evaluation, linking objective system aspects to objective user behavior through a series of perceptual and evaluative constructs (called subjective system aspects and experience, respectively).
User perception of differences in recommender algorithms
It is found that satisfaction is negatively dependent on novelty and positively dependent on diversity in this setting, and that satisfaction predicts the user's final selection of a recommender that they would like to use in the future.
Process Models Deserve Process Data: Comment on Brandstätter, Gigerenzer, and Hertwig (2006)
The results suggest that although the priority heuristic captures some variability in the attention paid to outcomes, it fails to account for major characteristics of the data, particularly the frequent transitions between outcomes and their probabilities.
Each to his own: how different users call for different interaction methods in recommender systems
The results show that most users (and particularly domain experts) are most satisfied with a hybrid recommender that combines implicit and explicit preference elicitation, but that novices and maximizers seem to benefit more from a non-personalized recommender that just displays the most popular items.
Behaviorism is Not Enough: Better Recommendations through Listening to Users
It is argued that listening to what users say about the items and recommendations they like, the control they wish to exert on the output, and the ways in which they perceive the system will enable important developments in the future of recommender systems.
Decision anomalies, experimenter assumptions, and participants' comprehension: revaluating the uncertainty effect
The above article (DOI: 10.1002/bdm.628) was published online on 14 November 2008 in Wiley InterScience (www.interscience.wiley.com). An error was subsequently identified: Page 3, line 22:
Evaluating Recommender Systems with User Experiments
This chapter provides a detailed practical description of how to conduct user experiments, covering the following topics: formulating hypotheses, sampling participants, creating experimental manipulations, measuring subjective constructs with questionnaires, and statistically evaluating the results.
Understanding choice overload in recommender systems
An investigation of the effect of recommendation set size and set quality on perceived variety, recommendation set attractiveness, choice difficulty, and satisfaction with the chosen item shows that larger sets containing only good items do not necessarily result in higher choice satisfaction compared to smaller sets.
A pragmatic procedure to support the user-centric evaluation of recommender systems
This work introduces a pragmatic procedure to evaluate recommender systems for experience products with test users, within industry constraints on time and budget.