Robust Cost-Sensitive Learning for Recommendation with Implicit Feedback

@inproceedings{Yang2018RobustCL,
  title={Robust Cost-Sensitive Learning for Recommendation with Implicit Feedback},
  author={Peng Yang and Peilin Zhao and Xin Gao and Yong Liu},
  booktitle={SDM},
  year={2018}
}
Recommendation aims to improve the customer experience through personalized suggestions based on users' past feedback. In this paper, we investigate the most common scenario: the user-item (U-I) matrix of implicit feedback. Although many recommendation approaches are designed for implicit feedback, they project the U-I matrix into a low-rank latent space, a strict restriction that rarely holds in practice. In addition, although misclassification costs from…
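
To make the cost-sensitive implicit-feedback setup concrete, the following is a minimal sketch of an asymmetric weighted factorization objective; the generic form, the margin loss $\ell$, and the costs $c^{+}$, $c^{-}$ are illustrative assumptions, not the paper's exact objective:

```latex
\min_{U,V}\;
\sum_{(i,j)\in\Omega^{+}} c^{+}\,\ell\!\left(+1,\; u_i^{\top} v_j\right)
\;+\;
\sum_{(i,j)\in\Omega^{-}} c^{-}\,\ell\!\left(-1,\; u_i^{\top} v_j\right)
\;+\;
\lambda\left(\|U\|_F^2 + \|V\|_F^2\right)
```

Here $\Omega^{+}$ is the set of observed interactions, $\Omega^{-}$ the unobserved entries treated as weak negatives, and setting $c^{-} \ll c^{+}$ reflects that an unobserved entry is only weakly informative about a user's preference.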

Citations

Robust Asymmetric Recommendation via Min-Max Optimization

TLDR
A robust asymmetric recommendation model is proposed that integrates cost-sensitive learning with a capped unilateral loss into a joint objective function, which can be optimized by an iteratively weighted approach; experiments on benchmark recommendation datasets demonstrate the effectiveness of the algorithm.
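
As an illustration of how cost-sensitive weighting and loss capping combine, here is a hedged Python sketch; the function name, the hinge form, and the defaults `c_pos`, `c_neg`, `cap` are assumptions, not the paper's exact formulation:

```python
def capped_asymmetric_loss(score, label, c_pos=1.0, c_neg=0.2, cap=2.0):
    """Capped, cost-sensitive hinge loss (illustrative sketch).

    label is +1 for an observed interaction, -1 for a missing entry.
    Positives and negatives carry different costs, and the loss is
    capped at `cap` so noisy or outlier feedback cannot dominate the
    objective. Names and defaults are assumptions.
    """
    cost = c_pos if label > 0 else c_neg
    hinge = max(0.0, 1.0 - label * score)  # standard hinge loss
    return cost * min(hinge, cap)          # capping bounds outlier influence
```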

Online Collaborative Filtering with Implicit Feedback

TLDR
A divestiture loss is proposed to correct the bias introduced by previously misclassified negative samples, and a cost-sensitive learning method is adopted to efficiently optimize the implicit MF model without imposing a heuristic weight restriction on missing data.

A Hybrid Bandit Framework for Diversified Recommendation

TLDR
The Linear Modular Dispersion Bandit (LMDB) framework is proposed as an online learning setting for optimizing a combination of modular functions and dispersion functions; a learning algorithm is developed to solve the LMDB problem, and a gap-free bound on its n-step regret is derived.
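
One common way to write such a diversified objective over a selected item set $S$ (an assumed instantiation, not necessarily LMDB's exact choice of dispersion function) combines a modular relevance term with a dispersion term:

```latex
f(S) \;=\; \sum_{i \in S} \theta^{\top} x_i \;+\; \lambda\, d(S),
\qquad
d(S) \;=\; \min_{i \neq j \in S} \mathrm{dist}(x_i, x_j)
```

The first term is modular in the items' features $x_i$, $d(S)$ rewards spreading the selected items apart, and $\lambda$ trades relevance against diversity.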

Bootstrap Latent Representations for Multi-modal Recommendation

TLDR
A novel self-supervised multi-modal recommendation model, dubbed BM3, is proposed; it requires neither augmentations from auxiliary graphs nor negative samples, removing both the need to contrast with negative examples and the complex graph augmentation from an additional target network for contrastive view generation.
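
To illustrate what a negative-sample-free objective looks like, here is an assumed Python sketch in the spirit of bootstrap methods; it is not BM3's exact loss, and `bootstrap_alignment_loss` is a hypothetical name:

```python
import numpy as np

def bootstrap_alignment_loss(z_online, z_target):
    """Negative-sample-free alignment loss (assumed sketch, not BM3's
    exact objective): pull an online embedding toward a stop-gradient
    target embedding via cosine similarity, so no negative examples
    need to be contrasted.
    """
    z_t = np.asarray(z_target, dtype=float)  # treated as a constant (stop-gradient)
    z_o = np.asarray(z_online, dtype=float)
    cos = z_o @ z_t / (np.linalg.norm(z_o) * np.linalg.norm(z_t))
    return 1.0 - cos  # minimized when online and target embeddings agree
```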

Contextualized Point-of-Interest Recommendation

TLDR
This paper proposes a new framework for POI recommendation that explicitly utilizes similarity together with contextual information and outperforms state-of-the-art methods.

Diversified Interactive Recommendation with Implicit Feedback

TLDR
A novel diversified recommendation model, named Diversified Contextual Combinatorial Bandit (DC2B), is proposed for interactive recommendation with users' implicit feedback that employs determinantal point process in the recommendation procedure to promote diversity of the recommendation results.
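
The determinantal point process mechanism can be summarized as follows: with a kernel $L$ built from per-item quality scores $q_i$ and an item similarity matrix $S$, the probability of recommending a subset $Y$ is proportional to a determinant. This quality/similarity decomposition is the standard DPP construction, not necessarily DC2B's exact kernel:

```latex
L \;=\; \mathrm{Diag}(q)\, S \,\mathrm{Diag}(q),
\qquad
P(Y) \;\propto\; \det\!\left(L_{Y}\right)
```

Because $\det(L_Y)$ shrinks when two selected items are similar, sets of diverse, high-quality items receive higher probability.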

Learning Hierarchical Review Graph Representation for Recommendation

TLDR
This paper proposes a novel review-based recommendation model, named Review Graph Neural Network (RGNN), which builds a specific review graph for each individual user/item, where nodes represent the review words and edges describe the connection types between words.

Layer-refined Graph Convolutional Networks for Recommendation

TLDR
A layer-refined GCN model, dubbed LayerGCN, is proposed that refines layer representations during information propagation and node updating, and prunes the edges of the user-item interaction graph following a degree-sensitive probability instead of a uniform distribution.
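
As a sketch of degree-sensitive pruning, the following Python function drops edges between high-degree nodes with higher probability; the exact weighting is an assumption, not LayerGCN's published rule:

```python
import numpy as np

def degree_sensitive_edge_pruning(edges, degree, drop_ratio=0.1, seed=0):
    """Illustrative degree-sensitive edge pruning (assumed form):
    edges between high-degree nodes are dropped with higher
    probability, so sparse users and items keep more of their few
    interactions.

    edges  : list of (user, item) pairs (distinct id spaces assumed)
    degree : dict mapping node id -> degree in the interaction graph
    """
    rng = np.random.default_rng(seed)
    # weight each edge by the product of its endpoints' degrees
    w = np.array([degree[u] * degree[i] for u, i in edges], dtype=float)
    p_drop = np.clip(drop_ratio * len(edges) * w / w.sum(), 0.0, 1.0)
    keep = rng.random(len(edges)) >= p_drop
    return [e for e, k in zip(edges, keep) if k]
```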

Multi-interest Aware Recommendation in CrowdIntell Network

  • Yixin Zhang, W. He, Li-zhen Cui, Lei Liu, Zhongmin Yan
  • Computer Science
    2020 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (ISPA/BDCloud/SocialCom/SustainCom)
  • 2020
TLDR
A multi-interest aware recommendation model in a CrowdIntell network is proposed, which comprises an embedding layer, a two-stage feature extraction layer, and a fully connected layer, and outperforms state-of-the-art sequential recommendation methods.

Modeling of Multilayer Multicontent Latent Tree and Its Applications

TLDR
A multilayer LTM is presented to deal with the hierarchical clustering of multicontent variables, and an incremental update approach for ML-LTM is proposed that saves five-sixths of the updating time compared with whole-model retraining while achieving the same recommendation accuracy.

References

Showing 1-10 of 49 references

BPR: Bayesian Personalized Ranking from Implicit Feedback

TLDR
This paper presents a generic optimization criterion, BPR-Opt, for personalized ranking that is the maximum posterior estimator derived from a Bayesian analysis of the problem, and provides a generic learning algorithm for optimizing models with respect to BPR-Opt.
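
For reference, the BPR-Opt criterion maximizes, over triples of a user $u$, an observed item $i$, and an unobserved item $j$:

```latex
\text{BPR-Opt} \;=\; \sum_{(u,i,j)\,\in\,D_S} \ln \sigma\!\left(\hat{x}_{ui} - \hat{x}_{uj}\right) \;-\; \lambda_{\Theta}\,\|\Theta\|^{2}
```

where $\sigma$ is the logistic sigmoid, $\hat{x}_{ui}$ the model's score for the pair $(u,i)$, and $\Theta$ the model parameters; the accompanying learning algorithm optimizes this criterion by stochastic gradient ascent over sampled triples.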

Collaborative Filtering for Implicit Feedback Datasets

TLDR
This work identifies unique properties of implicit feedback datasets and proposes treating the data as indication of positive and negative preference associated with vastly varying confidence levels, which leads to a factor model which is especially tailored for implicit feedback recommenders.
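
Concretely, this paper's confidence-weighted factor model maps raw observations $r_{ui}$ to binary preferences $p_{ui}$ with confidences $c_{ui}$ that grow with the observed activity:

```latex
p_{ui} = \begin{cases} 1 & r_{ui} > 0 \\ 0 & r_{ui} = 0 \end{cases},
\qquad
c_{ui} = 1 + \alpha\, r_{ui},
\qquad
\min_{x,y}\; \sum_{u,i} c_{ui}\left(p_{ui} - x_u^{\top} y_i\right)^{2}
+ \lambda\Big(\sum_u \|x_u\|^{2} + \sum_i \|y_i\|^{2}\Big)
```

Every entry of the U-I matrix, observed or not, contributes to the objective, but missing entries carry only the baseline confidence.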

One-Class Matrix Completion with Low-Density Factorizations

TLDR
This paper proposes an approach to explicitly deal with ambiguity in the customer-product matrix by treating the unobserved entries as optimization variables, similar to how Transductive SVMs implement the low-density separation principle for semi-supervised learning.

One-Class Collaborative Filtering

TLDR
This paper considers the one-class problem under the CF setting and proposes two frameworks to tackle OCCF: one based on weighted low-rank approximation, the other based on negative example sampling.
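
In the weighted low-rank approximation framework, unobserved entries are treated as zeros with reduced confidence; a generic form of the objective is below, where weighting schemes such as uniform, user-oriented, or item-oriented choose the weights $W_{ui}$ for missing entries:

```latex
\min_{U,V}\; \sum_{u,i} W_{ui}\left(R_{ui} - U_u V_i^{\top}\right)^{2}
\;+\; \lambda\left(\|U\|_F^{2} + \|V\|_F^{2}\right)
```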

Learning Correlative and Personalized Structure for Online Multi-Task Classification

TLDR
This paper proposes a general online MTL framework that overcomes the restriction of a presumed task-relatedness structure by decomposing the weight matrix into two components: the first captures the correlative structure among tasks in a low-rank subspace, and the second identifies the personalized patterns of the outlier tasks.

Robust Online Multi-Task Learning with Correlative and Personalized Structures

TLDR
This paper proposes a robust online MTL framework that avoids forcing task relatedness into a presumed structure via a single weight matrix, decomposing the weight matrix into two components: the first captures the low-rank common structure among tasks via a nuclear norm, and the second identifies the personalized patterns of outlier tasks through a group lasso.
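
The decomposition described above can be written as a single regularized objective; this is a schematic form consistent with the summary, not necessarily the paper's exact notation:

```latex
\min_{U,V}\; \mathcal{L}(U + V) \;+\; \lambda_{1}\,\|U\|_{*} \;+\; \lambda_{2}\,\|V\|_{2,1}
```

The nuclear norm $\|U\|_{*}$ encourages a low-rank common structure across tasks, while the group-lasso norm $\|V\|_{2,1}$ zeroes out all but the outlier tasks' personalized components.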

Compound classification models for recommender systems

  • L. Schmidt-Thieme
  • Computer Science
    Fifth IEEE International Conference on Data Mining (ICDM'05)
  • 2005
TLDR
This work investigates two particularities of the plain recommendation task without attributes, viewed as a multi-class classification problem: its autocorrelation structure and the absence of re-occurring items (repeat buying). It adapts the standard generic reductions of multi-class problems to sets of binary classification problems, 1-vs-rest and 1-vs-1, to provide a generic compound classifier for recommender systems.
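
As an illustration of the 1-vs-rest reduction named above, here is a hedged sketch using scikit-learn (which postdates the original paper); the feature matrix, the class set, and the data are placeholder assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

# Recommendation treated as multi-class classification: every item ID
# becomes a class, and the 1-vs-rest reduction trains one binary
# classifier per item. X (e.g., an encoded basket or user context)
# and y are hypothetical placeholders.
rng = np.random.default_rng(0)
X = rng.random((200, 20))        # assumed user/basket features
y = rng.integers(0, 5, 200)      # assumed item IDs as class labels

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, y)
scores = clf.decision_function(X[:3])  # per-item scores; rank them to recommend
```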

Improving maximum margin matrix factorization

TLDR
A number of extensions to MMMF are discussed, introducing offset terms, item-dependent regularization, and a graph kernel on the recommender graph, and equivalence is shown between graph kernels and the recent MMMF extensions by Mnih and Salakhutdinov.

A Survey of Collaborative Filtering Techniques

TLDR
From basic techniques to the state of the art, this paper attempts to present a comprehensive survey of CF techniques, which can serve as a roadmap for research and practice in this area.

The Foundations of Cost-Sensitive Learning

TLDR
It is argued that changing the balance of negative and positive training examples has little effect on the classifiers produced by standard Bayesian and decision tree learning methods, and the recommended way of applying one of these methods is to learn a classifier from the training set and then to compute optimal decisions explicitly using the probability estimates given by the classifier.
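
The paper's prescription for computing optimal decisions can be stated as a threshold rule: with cost $C_{FP}$ for a false positive, cost $C_{FN}$ for a false negative, and zero cost for correct decisions, predict positive exactly when the estimated probability clears the cost-derived threshold:

```latex
\text{predict positive} \iff P(y = 1 \mid x) \;\geq\; p^{*} \;=\; \frac{C_{FP}}{C_{FP} + C_{FN}}
```

Rebalancing the training set merely shifts the classifier's probability estimates, whereas this rule applies the costs explicitly at decision time.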