M6-Rec: Generative Pretrained Language Models are Open-Ended Recommender Systems

@article{Cui2022M6RecGP,
  title={M6-Rec: Generative Pretrained Language Models are Open-Ended Recommender Systems},
  author={Zeyu Cui and Jianxin Ma and Chang Zhou and Jingren Zhou and Hongxia Yang},
  journal={ArXiv},
  year={2022},
  volume={abs/2205.08084}
}
Industrial recommender systems have been growing increasingly complex: they may involve diverse domains such as e-commerce products and user-generated content, and can comprise a myriad of tasks such as retrieval, ranking, explanation generation, and even AI-assisted content production. The mainstream approach so far is to develop individual algorithms for each domain and each task. In this paper, we explore the possibility of developing a unified foundation model to support open-ended domains and…
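The core idea stated in the title is that recommendation can be cast as open-ended text generation with a pretrained language model. Below is a minimal illustrative sketch of that framing only, not M6-Rec's actual architecture or prompt format (the paper builds on the M6 foundation model); the checkpoint name, prompt wording, and decoding settings here are assumptions chosen purely for illustration.

# Sketch: verbalize a user's behavior as text and ask a generative pretrained
# LM for a recommendation. This is NOT the M6-Rec implementation; the model
# ("t5-small"), prompt, and decoding parameters are illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# A user's recent behavior rendered as plain text, followed by a task instruction.
prompt = (
    "A user recently viewed: wireless earbuds, a phone case, a USB-C charger. "
    "Recommend the next product the user may like:"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Because every domain (products, user-generated content) and every task (retrieval, ranking, explanation) can be expressed as such a text-in, text-out problem, a single generative model can in principle serve them all, which is the unification the abstract argues for.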
