Incorporating Distinct Opinions in Content Recommender System

@inproceedings{Lee2015IncorporatingDO,
  title={Incorporating Distinct Opinions in Content Recommender System},
  author={Grace E. Lee and Keejun Han and Mun Yong Yi},
  booktitle={AIRS},
  year={2015}
}
As the media content industry continues to grow, the content market has become highly competitive. Various strategies such as advertising and Word-of-Mouth (WOM) have been used to draw people's attention. It is hard for users to remain completely free of others' influence, and thus their opinions become, to some extent, affected and biased. In the field of recommender systems, prior research on biased opinions has attempted to reduce and isolate the effects of external influences in…

Search Personalization in Folksonomy by Exploiting Multiple and Temporal Aspects of User Profiles

A search personalization framework is proposed that constructs a user profile network by identifying the user's multiple topics and the temporal values of tags; with the best combination of ranking functions and link-analysis techniques, it consistently outperforms all of the compared models.

References


Opinion-Based Collaborative Filtering to Solve Popularity Bias in Recommender Systems

This paper proposes an opinion-based collaborative filtering approach that introduces weighting functions to adjust the influence of popular items, in order to solve the popularity bias problem in recommender systems.
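The weighting functions themselves are not reproduced in this summary, so the following is only a minimal sketch of the general idea, assuming a simple inverse-popularity weight applied inside an item-based collaborative filtering prediction; the function names, the weight form, and the parameter alpha are illustrative rather than taken from the paper.

```python
import numpy as np

def inverse_popularity_weight(item_counts, alpha=0.5):
    """Illustrative weight: items rated by many users are down-weighted.

    item_counts: number of users who rated each neighbouring item.
    alpha controls how aggressively popular items are penalized.
    """
    return 1.0 / np.power(item_counts.astype(float), alpha)

def predict_rating(user_ratings, similarities, item_counts, alpha=0.5):
    """Popularity-weighted item-based CF prediction for one target item.

    user_ratings: ratings the user gave to neighbouring items.
    similarities: item-item similarities between those items and the target.
    item_counts:  popularity (number of raters) of each neighbouring item.
    """
    w = inverse_popularity_weight(item_counts, alpha) * similarities
    if np.abs(w).sum() == 0:
        return np.nan  # no usable neighbours
    return float(np.dot(w, user_ratings) / np.abs(w).sum())

# toy usage: three neighbouring items, one of them very popular
print(predict_rating(
    user_ratings=np.array([5.0, 3.0, 4.0]),
    similarities=np.array([0.9, 0.8, 0.7]),
    item_counts=np.array([10, 5000, 20]),
))
```

Down-weighting a neighbour by how many users rated it makes widely rated blockbusters contribute less to the prediction than niche items with comparable similarity.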

Instant foodie: predicting expert ratings from grassroots

This paper examines two different approaches to collecting user ratings of restaurants, investigates whether the two can be reconciled, and studies the problem of inferring the more calibrated Zagat Survey ratings from the user-generated ratings in Google Places.

Learning preferences of new users in recommender systems: an information theoretic approach

The work of [23] is extended by incrementally developing a set of information-theoretic strategies for the new-user problem, proposing an offline simulation framework, and evaluating the strategies through extensive offline simulations and an online experiment with real users of a live recommender system.
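As an illustration of what an information-theoretic elicitation strategy can look like (not the paper's exact strategies), the sketch below asks new users about the items whose existing ratings have the highest entropy, i.e. the items the community disagrees on most; the scoring criterion and the support threshold are assumptions made for the example.

```python
import numpy as np
from collections import Counter

def rating_entropy(ratings):
    """Shannon entropy of the observed rating distribution for one item."""
    counts = np.array(list(Counter(ratings).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def items_to_ask(item_ratings, k=3, min_support=5):
    """Pick the k sufficiently-rated items with the highest rating entropy,
    a common information-theoretic heuristic for new-user elicitation."""
    scored = [
        (item, rating_entropy(r))
        for item, r in item_ratings.items()
        if len(r) >= min_support
    ]
    scored.sort(key=lambda x: x[1], reverse=True)
    return [item for item, _ in scored[:k]]

# toy usage: item "B" splits opinion, so it is the most informative to ask about
catalog = {
    "A": [5, 5, 5, 5, 5, 4],
    "B": [1, 5, 2, 5, 1, 4],
    "C": [3, 3, 3, 3, 3, 3],
}
print(items_to_ask(catalog, k=2))
```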

Factorization meets the neighborhood: a multifaceted collaborative filtering model

The factor and neighborhood models can now be smoothly merged, thereby building a more accurate combined model, and a new evaluation metric is suggested that highlights the differences among methods based on their performance at a top-K recommendation task.
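The summary only names the merge of factor and neighborhood models; the sketch below shows the rough shape of such a combined prediction (baseline + latent-factor term + weighted neighborhood correction), with made-up parameters and without the implicit-feedback terms of the full integrated model.

```python
import numpy as np

def combined_predict(mu, b_u, b_i, p_u, q_i, neighbour_resid, neighbour_w):
    """Shape of a merged factor + neighbourhood prediction:

        r_hat = mu + b_u + b_i               (baseline estimate)
              + q_i . p_u                    (latent-factor term)
              + normalized weighted sum of residuals of rated neighbours
                                             (neighbourhood term)
    """
    baseline = mu + b_u + b_i
    factor = float(np.dot(q_i, p_u))
    k = len(neighbour_resid)
    neighbourhood = (
        float(np.dot(neighbour_resid, neighbour_w)) / np.sqrt(k) if k else 0.0
    )
    return baseline + factor + neighbourhood

# toy usage with arbitrary numbers
print(combined_predict(
    mu=3.6, b_u=0.2, b_i=-0.1,
    p_u=np.array([0.3, -0.1]), q_i=np.array([0.5, 0.4]),
    neighbour_resid=np.array([0.8, -0.2]),   # observed rating minus baseline
    neighbour_w=np.array([0.15, 0.05]),      # learned neighbourhood weights
))
```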

Getting to know you: learning new user preferences in recommender systems

Six techniques that collaborative filtering recommender systems can use to learn about new users are studied, showing that the choice of learning technique significantly affects the user experience, in both the user effort and the accuracy of the resulting predictions.

Evaluating collaborative filtering recommender systems

The key decisions in evaluating collaborative filtering recommender systems are reviewed: the user tasks being evaluated, the types of analysis and datasets being used, the ways in which prediction quality is measured, the evaluation of prediction attributes other than quality, and the user-based evaluation of the system as a whole.

Context adaptation in interactive recommender systems

This paper introduces an interactive recommender system that can detect and adapt to changes in context based on the user's ongoing behavior, and uses a Thompson sampling heuristic to learn a model for the user.
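Thompson sampling is the one concrete mechanism named in the summary; a minimal Bernoulli bandit version with Beta priors over per-arm click probabilities is sketched below. The paper's model of the user is richer than this, so the arm set, priors, and simulated feedback here are purely illustrative.

```python
import random

class ThompsonSampler:
    """Bernoulli Thompson sampling over a fixed set of recommendation arms."""

    def __init__(self, arms):
        # Beta(1, 1) prior for each arm's click probability
        self.alpha = {a: 1.0 for a in arms}
        self.beta = {a: 1.0 for a in arms}

    def recommend(self):
        # sample a plausible click rate per arm, recommend the best sample
        samples = {a: random.betavariate(self.alpha[a], self.beta[a])
                   for a in self.alpha}
        return max(samples, key=samples.get)

    def update(self, arm, clicked):
        # posterior update from the observed feedback
        if clicked:
            self.alpha[arm] += 1.0
        else:
            self.beta[arm] += 1.0

# toy usage: simulate feedback where "news" is clicked most often
sampler = ThompsonSampler(["news", "sports", "music"])
true_rates = {"news": 0.6, "sports": 0.3, "music": 0.2}
for _ in range(500):
    arm = sampler.recommend()
    sampler.update(arm, random.random() < true_rates[arm])
print(sampler.recommend())
```

Because each arm's estimate is sampled rather than taken at its mean, the recommender keeps occasionally exploring less-chosen arms, which is what lets it notice when the user's context (and click behavior) shifts.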

Context-Aware SVM for Context-Dependent Information Recommendation

The purpose of this study is to propose a Context-Aware Support Vector Machine (C-SVM) for application in a context-dependent recommendation system. It is important to consider users' contexts in…
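The C-SVM formulation is cut off above, so the sketch below only illustrates one common way to make an SVM context-dependent: encode the context as additional input features next to user/item features and train a standard SVM classifier (scikit-learn here). The feature layout and labels are invented for the example and are not the paper's actual setup.

```python
import numpy as np
from sklearn.svm import SVC

# illustrative features: [user_age_norm, item_category, ctx_weekend, ctx_evening]
X = np.array([
    [0.2, 1, 1, 0],
    [0.2, 1, 0, 1],
    [0.7, 2, 1, 1],
    [0.7, 2, 0, 0],
    [0.4, 1, 1, 1],
    [0.4, 2, 0, 0],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = relevant in that context, 0 = not

# a plain SVM becomes "context-aware" only through the context columns in X
model = SVC(kernel="rbf", C=1.0, gamma="scale")
model.fit(X, y)

# recommend-or-not decision for a new (user, item, context) combination
print(model.predict(np.array([[0.3, 1, 1, 0]])))
```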

Hybrid Recommendation Models for Binary User Preference Prediction Problem

Task 2 is called the binary user preference prediction problem in the paper because it aims at separating tracks rated highly by specific users from tracks they have not rated, and the solutions to this task can be easily applied to binary user behavior data.

Rating: how difficult is it?

A comparison of four different rating scales, unary ("like it"), binary (thumbs up / thumbs down), five-star, and a 100-point slider, suggests guidelines for designers choosing between rating scales.