Recommendation as Language Processing (RLP): A Unified Pretrain, Personalized Prompt & Predict Paradigm (P5)
@article{Geng2022RecommendationAL,
  title   = {Recommendation as Language Processing (RLP): A Unified Pretrain, Personalized Prompt \& Predict Paradigm (P5)},
  author  = {Shijie Geng and Shuchang Liu and Zuohui Fu and Yingqiang Ge and Yongfeng Zhang},
  journal = {Proceedings of the 16th ACM Conference on Recommender Systems},
  year    = {2022}
}
For a long time, different recommendation tasks have typically required designing task-specific architectures and training objectives. As a result, it is hard to transfer knowledge and representations from one task to another, which restricts the generalization ability of existing recommendation approaches. To deal with these issues, and considering that language can describe almost anything and that language grounding is a powerful medium for representing various problems or tasks, we present a flexible and unified…
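To make the text-to-text framing concrete, here is a minimal sketch of how a recommendation task can be verbalized as a personalized prompt and answered by a sequence-to-sequence language model. It uses a vanilla Hugging Face t5-small checkpoint; the checkpoint, the user/item ID tokens, and the prompt template are illustrative assumptions, not the paper's released P5 model or prompt collection.

```python
# Minimal sketch: a recommendation task phrased as a personalized text prompt
# and answered by a text-to-text model. Uses a vanilla t5-small checkpoint
# (NOT the trained P5 model); the prompt template is illustrative only.
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# A sequential-recommendation query: user and item IDs appear as plain
# tokens in the input text, so one model can serve many task templates.
prompt = (
    "user_23 has purchased item_7391, item_852, and item_1025. "
    "Predict the next item this user will interact with:"
)

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Without P5-style multitask prompt pretraining, the vanilla checkpoint will not emit a meaningful item ID; the sketch only shows the unified prompt-in, text-out interface that lets one model cover rating, sequential recommendation, explanation, and other tasks.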
15 Citations
TransRec: Learning Transferable Recommendation from Mixture-of-Modality Feedback
- Computer Science · ArXiv
- 2022
The results suggest that learning neural recommendation models from mixture-of-modality (MoM) feedback is a promising way to realize universal recommender systems; the authors propose TransRec, a very simple modification of the popular ID-based RS framework.
Learning Large-scale Universal User Representation with Sparse Mixture of Experts
- Computer Science · ArXiv
- 2022
This paper proposes SUPERMOE, a generic framework for obtaining high-quality user representations from multiple tasks, and designs a new loss function with task indicators to deal with the seesaw phenomenon when learning across multiple tasks.
AutoLossGen: Automatic Loss Function Generation for Recommender Systems
- Computer Science · SIGIR
- 2022
This paper proposes an automatic loss function generation framework, AutoLossGen, which can generate loss functions directly constructed from basic mathematical operators without prior knowledge of loss structure, and shows that the generated losses give better recommendation performance than commonly used baseline losses.
Learn Basic Skills and Reuse: Modularized Adaptive Neural Architecture Search (MANAS)
- Computer Science · CIKM
- 2022
MANAS borrows the idea of modularized neural logic reasoning and considers three basic logical operation modules: AND, OR, NOT. Experiments on different datasets show that the adaptive architecture assembled by MANAS outperforms static global architectures.
Fairness in Recommendation: A Survey
- Computer Science · ArXiv
- 2022
This survey reviews the foundations of fairness in the recommendation literature, focusing on the taxonomies of current fairness definitions, the typical techniques for improving fairness, and the datasets for fairness studies in recommendation.
A Survey on Trustworthy Recommender Systems
- Computer Science · ArXiv
- 2022
This survey introduces techniques related to trustworthy and responsible recommendation, including but not limited to explainable recommendation, fairness in recommendation, privacy-aware recommendation, and robustness in recommendation, as well as the relationships among these perspectives.
Explainable Fairness in Recommendation
- Computer Science · SIGIR
- 2022
A Counterfactual Explainable Fairness framework, called CEF, is proposed; it generates explanations about model fairness that can improve fairness without significantly hurting performance, and it guides the design of fair recommender systems with a more informed and unified methodology.
Pivotal Role of Language Modeling in Recommender Systems: Enriching Task-specific and Task-agnostic Representation Learning
- Computer Science · ArXiv
- 2022
It is shown that language modeling applied directly to task-specific user histories achieves excellent results on diverse recommendation tasks and can provide promising transfer learning capabilities for a broad spectrum of real-world recommender systems, even on unseen domains and services.
Causal Inference for Recommendation: Foundations, Methods and Applications
- Computer Science · ArXiv
- 2023
In this survey, the fundamental concepts of both recommender systems and causal inference, as well as their relationship, are discussed, and the existing work on causal methods for different problems in recommender systems is reviewed.
BioReader: a Retrieval-Enhanced Text-to-Text Transformer for Biomedical Literature
- Computer Science · EMNLP
- 2022
This work introduces BioReader, the first retrieval-enhanced text-to-text model for biomedical natural language processing, and shows that domain knowledge can be easily altered or supplemented to make the model generate correct predictions without retraining, thus addressing the literature-overload issue.
References
Showing 1-10 of 85 references.
Personalized Transformer for Explainable Recommendation
- Computer Science · ACL
- 2021
A PErsonalized Transformer for Explainable Recommendation (PETER) is proposed, on which a simple and effective learning objective is designed that utilizes IDs to predict the words in the target explanation, so as to endow the IDs with linguistic meaning and achieve a personalized Transformer.
Learning How to Ask: Querying LMs with Mixtures of Soft Prompts
- Computer Science · NAACL
- 2021
This work explores the idea of learning prompts by gradient descent, either fine-tuning prompts taken from previous work or starting from random initialization, and shows that the implicit factual knowledge in language models was previously underestimated (a minimal sketch of the idea follows this entry).
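As a concrete illustration of learned "soft" prompts, the sketch below prepends trainable embedding vectors to the token embeddings of a frozen GPT-2 from Hugging Face and takes one gradient step on them. The model choice, prompt length, learning rate, and the "Paris/France" probe are all illustrative assumptions, not details from the paper.

```python
# Sketch of soft-prompt learning: only the prepended prompt vectors train;
# the language model itself stays frozen. All hyperparameters are arbitrary.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2")
for p in model.parameters():
    p.requires_grad = False  # freeze every LM weight

n_prompt, d = 5, model.config.n_embd
soft_prompt = torch.nn.Parameter(torch.randn(n_prompt, d) * 0.02)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
ids = tokenizer("Paris is the capital of", return_tensors="pt").input_ids
tok_emb = model.transformer.wte(ids)                        # (1, T, d)
inputs_embeds = torch.cat([soft_prompt.unsqueeze(0), tok_emb], dim=1)

# Maximize the log-probability of " France" at the final position; only
# soft_prompt receives gradients.
optimizer = torch.optim.Adam([soft_prompt], lr=1e-3)
target_id = tokenizer.encode(" France")[0]
logits = model(inputs_embeds=inputs_embeds).logits
loss = -logits[:, -1].log_softmax(-1)[0, target_id]
loss.backward()
optimizer.step()
```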
Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer
- Computer Science · J. Mach. Learn. Res.
- 2020
This systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks and achieves state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more.
Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing
- Computer Science · ACM Computing Surveys
- 2022
The basics of this promising paradigm in natural language processing are introduced, a unified set of mathematical notations that can cover a wide variety of existing work are described, and existing work is organized along several dimensions.
Prefix-Tuning: Optimizing Continuous Prompts for Generation
- Computer Science · ACL
- 2021
Prefix-tuning is proposed as a lightweight alternative to fine-tuning for natural language generation tasks: it keeps the language model parameters frozen and instead optimizes a sequence of continuous task-specific vectors, called the prefix (a minimal sketch follows this entry).
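The sketch below illustrates the prefix-tuning idea on a frozen Hugging Face GPT-2 by injecting trainable key/value tensors through the legacy `past_key_values` interface. The model, prefix length, task string, and optimizer settings are illustrative assumptions rather than the paper's setup.

```python
# Sketch of prefix-tuning: trainable per-layer key/value "prefix" states are
# fed to a frozen GPT-2 as past_key_values; only the prefix is optimized.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2")
for p in model.parameters():
    p.requires_grad = False  # the LM stays frozen

cfg = model.config
prefix_len = 5
head_dim = cfg.n_embd // cfg.n_head
# One trainable (key, value) pair per layer, shaped like cached attention
# states: (batch, n_head, prefix_len, head_dim).
prefix = [
    (torch.nn.Parameter(0.02 * torch.randn(1, cfg.n_head, prefix_len, head_dim)),
     torch.nn.Parameter(0.02 * torch.randn(1, cfg.n_head, prefix_len, head_dim)))
    for _ in range(cfg.n_layer)
]

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
batch = tokenizer("translate to French: hello", return_tensors="pt")
# The attention mask must cover the prefix positions plus the real tokens.
attn = torch.cat(
    [torch.ones(1, prefix_len, dtype=torch.long), batch.attention_mask], dim=1
)
out = model(
    input_ids=batch.input_ids,
    attention_mask=attn,
    past_key_values=tuple(prefix),
    labels=batch.input_ids,  # ordinary LM loss on the real tokens
)
optimizer = torch.optim.Adam([p for kv in prefix for p in kv], lr=1e-3)
out.loss.backward()  # gradients flow only into the prefix tensors
optimizer.step()
```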
Language Models are Unsupervised Multitask Learners
- Computer Science
- 2019
It is demonstrated that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText, suggesting a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.
Generate Neural Template Explanations for Recommendation
- Computer Science · CIKM
- 2020
Experimental results on real-world datasets show that NETE consistently outperforms state-of-the-art explanation generation approaches in terms of sentence quality and expressiveness, and a case-study analysis shows the advantages of NETE in generating diverse and controllable explanations.
Learning to Prompt for Vision-Language Models
- Computer Science · International Journal of Computer Vision
- 2022
Context Optimization (CoOp) is proposed, a simple approach for adapting CLIP-like vision-language models to downstream image recognition; it achieves superb domain generalization performance compared with the zero-shot model using hand-crafted prompts.
Unifying Vision-and-Language Tasks via Text Generation
- Computer Science · ICML
- 2021
This work proposes a unified framework that learns different tasks in a single architecture with the same language modeling objective, i.e., multimodal conditional text generation, where the models learn to generate labels in text based on the visual and textual inputs.
Multitask Prompted Training Enables Zero-Shot Task Generalization
- Computer Science · ICLR
- 2022
A system is developed for easily mapping any natural language task into a human-readable prompted form, and a pretrained encoder-decoder model is fine-tuned on a multitask mixture covering a wide variety of tasks.