Recommendation as Language Processing (RLP): A Unified Pretrain, Personalized Prompt & Predict Paradigm (P5)

@article{Geng2022RecommendationAL,
  title={Recommendation as Language Processing (RLP): A Unified Pretrain, Personalized Prompt \& Predict Paradigm (P5)},
  author={Shijie Geng and Shuchang Liu and Zuohui Fu and Yingqiang Ge and Yongfeng Zhang},
  journal={Proceedings of the 16th ACM Conference on Recommender Systems},
  year={2022}
}

For a long time, different recommendation tasks have typically required designing task-specific architectures and training objectives. As a result, it is difficult to transfer knowledge and representations from one task to another, which restricts the generalization ability of existing recommendation approaches. To deal with such issues, and considering that language can describe almost anything and that language grounding is a powerful medium for representing various problems or tasks, we present a flexible and unified…
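
As a rough illustration of this text-to-text formulation (the templates, field names, and toy IDs below are invented for the example, not P5's actual prompt collection), different recommendation tasks can be cast as input/target text pairs consumed by a single generative model:

```python
# Minimal sketch of casting recommendation tasks as text-to-text pairs,
# in the spirit of P5's personalized prompts. Template wording and field
# names are illustrative placeholders, not the paper's prompt set.

def sequential_prompt(user_id, history, next_item):
    """Sequential recommendation phrased as next-item text prediction."""
    source = (f"User_{user_id} has interacted with items "
              f"{', '.join(str(i) for i in history)}. "
              f"Predict the next item the user will interact with.")
    return source, str(next_item)

def rating_prompt(user_id, item_id, rating):
    """Rating prediction phrased as text generation."""
    source = f"What star rating (1-5) will User_{user_id} give to item {item_id}?"
    return source, str(rating)

# Both tasks share one model and one objective: conditional text generation.
pairs = [
    sequential_prompt(23, [7391, 852, 1048], 2216),
    rating_prompt(23, 2216, 5),
]
for src, tgt in pairs:
    print(f"INPUT : {src}\nTARGET: {tgt}\n")
```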

TransRec: Learning Transferable Recommendation from Mixture-of-Modality Feedback

The results suggest that learning neural recommendation models from mixture-of-modality (MoM) feedback provides a promising way to realize a universal recommender system, and the paper proposes TransRec, a very simple modification of the popular ID-based recommendation framework.

Learning Large-scale Universal User Representation with Sparse Mixture of Experts

This paper proposes SUPERMOE, a generic framework for obtaining high-quality user representations from multiple tasks, and designs a new loss function with task indicators to deal with the seesaw phenomenon when learning across multiple tasks.
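
For intuition only, here is a minimal sparse mixture-of-experts sketch in the spirit of such a framework; the layer sizes, top-k routing, and gating scheme are generic placeholders rather than SUPERMOE's actual design:

```python
# Illustrative sparse mixture-of-experts layer for user representations:
# a gating network routes each user feature vector to its top-k experts.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseMoE(nn.Module):
    def __init__(self, dim=64, num_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
             for _ in range(num_experts)])
        self.gate = nn.Linear(dim, num_experts)
        self.top_k = top_k

    def forward(self, x):                      # x: (batch, dim)
        scores = self.gate(x)                  # (batch, num_experts)
        topv, topi = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(topv, dim=-1)      # renormalize over selected experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = (topi[:, slot] == e)    # users routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot:slot + 1] * expert(x[mask])
        return out

user_feats = torch.randn(16, 64)
print(SparseMoE()(user_feats).shape)           # torch.Size([16, 64])
```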

AutoLossGen: Automatic Loss Function Generation for Recommender Systems

This paper proposes an automatic loss function generation framework, AutoLossGen, which is able to generate loss functions directly constructed from basic mathematical operators without prior knowledge of loss structure, and shows that the generated losses give better recommendation performance than commonly used baseline losses.
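
The sketch below conveys only the flavor of composing losses from basic operators; the expression encoding and the two hand-written candidates are illustrative, and AutoLossGen's actual search procedure is not shown:

```python
# Toy sketch: a candidate loss is an expression tree over primitive
# mathematical operators, evaluated element-wise on (pred, label).
import math

PRIMITIVES = {
    "add":    lambda a, b: a + b,
    "sub":    lambda a, b: a - b,
    "mul":    lambda a, b: a * b,
    "neg":    lambda a: -a,
    "log":    lambda a: math.log(max(a, 1e-8)),   # clipped for stability
    "square": lambda a: a * a,
}

# Two hand-written candidates, encoded as nested (op, args...) tuples.
SQUARED_ERROR = ("square", ("sub", "pred", "label"))
LOG_LOSS = ("neg", ("add",
                    ("mul", "label", ("log", "pred")),
                    ("mul", ("sub", 1.0, "label"), ("log", ("sub", 1.0, "pred")))))

def evaluate(expr, pred, label):
    """Recursively evaluate an expression tree on one (pred, label) pair."""
    if expr == "pred":
        return pred
    if expr == "label":
        return label
    if isinstance(expr, (int, float)):
        return expr
    op, *args = expr
    return PRIMITIVES[op](*(evaluate(a, pred, label) for a in args))

print(evaluate(SQUARED_ERROR, pred=0.9, label=1.0))  # ~0.01
print(evaluate(LOG_LOSS,      pred=0.9, label=1.0))  # binary cross-entropy, ~0.105
```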

Learn Basic Skills and Reuse: Modularized Adaptive Neural Architecture Search (MANAS)

MANAS borrows the idea of modularized neural logic reasoning and considers three basic logical operation modules: AND, OR, and NOT. Experiments on different datasets show that the adaptive architecture assembled by MANAS outperforms static global architectures.
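
A minimal sketch of such modularized neural logic operators follows; the module shapes and the fixed example composition are assumptions for illustration, whereas MANAS assembles the modules adaptively per input:

```python
# Neural logic modules as small MLPs over embedding vectors: AND and OR
# take two inputs, NOT takes one, and an expression is built by composing them.
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=64):
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))

class NeuralLogic(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.AND = mlp(2 * dim, dim)   # AND(a, b)
        self.OR  = mlp(2 * dim, dim)   # OR(a, b)
        self.NOT = mlp(dim, dim)       # NOT(a)

    def forward(self, a, b, c):
        # One possible assembled expression: (a AND b) OR (NOT c).
        # An architecture controller would choose this composition per input.
        return self.OR(torch.cat([self.AND(torch.cat([a, b], -1)),
                                  self.NOT(c)], -1))

logic = NeuralLogic()
a, b, c = (torch.randn(4, 32) for _ in range(3))
print(logic(a, b, c).shape)            # torch.Size([4, 32])
```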

Fairness in Recommendation: A Survey

This survey reviews the foundations of fairness in the recommendation literature, focusing on the taxonomies of current fairness definitions, the typical techniques for improving fairness, and the datasets for fairness studies in recommendation.

A Survey on Trustworthy Recommender Systems

This survey introduces techniques related to trustworthy and responsible recommendation, including but not limited to explainable recommendation, fairness in recommendation, privacy-aware recommendation, and robustness in recommendation, as well as the relationships between these perspectives on trustworthy and responsible recommendation.

Explainable Fairness in Recommendation

A Counterfactual Explainable Fairness framework, called CEF, is proposed; it generates explanations about model fairness that can improve fairness without significantly hurting recommendation performance, and it guides the design of fair recommender systems with a more informed and unified methodology.

Learning Vector-Quantized Item Representation for Transferable Sequential Recommenders

VQ-Rec is proposed, a novel approach to learning vector-quantized item representations for transferable sequential recommenders; it uses semi-synthetic and mixed-domain code representations as hard negatives and introduces a new cross-domain fine-tuning method based on a differentiable permutation-based network.
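
As a rough sketch of the general vector-quantization idea (codebook sizes, dimensions, and the frozen-codebook shortcut are placeholders, not VQ-Rec's actual training pipeline), an item's text embedding can be discretized into codes whose learnable embeddings replace a raw item-ID embedding:

```python
# An item's text embedding is split into sub-vectors, each mapped to its
# nearest codebook entry; the item is then represented by the learnable
# embeddings of its discrete codes instead of an item-ID embedding.
import torch
import torch.nn as nn

class VQItemEncoder(nn.Module):
    def __init__(self, text_dim=128, n_splits=4, codebook_size=256, out_dim=64):
        super().__init__()
        sub = text_dim // n_splits
        # Codebooks used only to discretize text embeddings into codes.
        self.codebooks = nn.Parameter(torch.randn(n_splits, codebook_size, sub),
                                      requires_grad=False)
        # Learnable embeddings looked up by the discrete codes.
        self.code_emb = nn.Embedding(n_splits * codebook_size, out_dim)
        self.n_splits, self.codebook_size = n_splits, codebook_size

    def forward(self, text_emb):                        # (batch, text_dim)
        chunks = text_emb.chunk(self.n_splits, dim=-1)  # n_splits x (batch, sub)
        reps = 0
        for i, chunk in enumerate(chunks):
            dists = torch.cdist(chunk, self.codebooks[i])   # (batch, codebook_size)
            codes = dists.argmin(dim=-1) + i * self.codebook_size
            reps = reps + self.code_emb(codes)
        return reps                                     # (batch, out_dim)

items = torch.randn(8, 128)                             # e.g. text-encoder embeddings
print(VQItemEncoder()(items).shape)                     # torch.Size([8, 64])
```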

Assessing Combinational Generalization of Language Models in Biased Scenarios

The results show that PLMs are able to overcome such distribution shifts for specific tasks given sufficient data, and that overfitting can lead the models to rely more on biases for prediction, thus hurting the combinational generalization ability of PLMs.

Data-Efficient Concept Extraction from Pre-trained Language Models for Commonsense Explanation Generation

A method is proposed to predict concepts from pre-trained language models for commonsense explanation generation, with a metric designed to evaluate the retrieved concepts; the correlation between this metric and generator performance is shown, along with the importance of attaching concepts for generating high-quality sentences.

References


Personalized Transformer for Explainable Recommendation

A PErsonalized Transformer for Explainable Recommendation (PETER) is proposed, on which a simple and effective learning objective is designed that utilizes the IDs to predict the words in the target explanation, so as to endow the IDs with linguistic meaning and to achieve a personalized Transformer.
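
A toy sketch of that flavor of objective follows, with all sizes and the bag-of-words-style head invented for illustration (PETER itself is a Transformer generator):

```python
# Toy objective: predict the words of the target explanation directly
# from the user/item ID pair, so the ID embeddings acquire linguistic meaning.
import torch
import torch.nn as nn

n_users, n_items, vocab, dim = 1000, 5000, 20000, 64
user_emb = nn.Embedding(n_users, dim)
item_emb = nn.Embedding(n_items, dim)
word_head = nn.Linear(2 * dim, vocab)

def context_loss(user_ids, item_ids, explanation_tokens):
    """Cross-entropy over every word of the target explanation, conditioned on IDs."""
    ids = torch.cat([user_emb(user_ids), item_emb(item_ids)], dim=-1)
    logits = word_head(ids)                                    # (batch, vocab)
    # Repeat the ID-pair logits for each explanation position.
    logits = logits.unsqueeze(1).expand(-1, explanation_tokens.size(1), -1)
    return nn.functional.cross_entropy(logits.reshape(-1, vocab),
                                       explanation_tokens.reshape(-1))

users = torch.randint(0, n_users, (4,))
items = torch.randint(0, n_items, (4,))
words = torch.randint(0, vocab, (4, 10))                       # target explanations
print(context_loss(users, items, words).item())
```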

Learning How to Ask: Querying LMs with Mixtures of Soft Prompts

This work explores the idea of learning prompts by gradient descent, either fine-tuning prompts taken from previous work or starting from random initialization, and shows that the implicit factual knowledge in language models was previously underestimated.

Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer

This systematic study compares pre-training objectives, architectures, unlabeled datasets, transfer approaches, and other factors on dozens of language understanding tasks and achieves state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more.

Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing

A unified set of mathematical notations that can cover a wide variety of existing work is described, and existing work is organized along several dimensions, e.g., the choice of pre-trained language models, prompts, and tuning strategies.

Prefix-Tuning: Optimizing Continuous Prompts for Generation

Prefix-tuning is proposed, a lightweight alternative to fine-tuning for natural language generation tasks, which keeps language model parameters frozen and instead optimizes a sequence of continuous task-specific vectors, called the prefix.
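
A minimal sketch of this idea, using a toy stand-in for the language model and an arbitrary training signal; only the prefix receives gradient updates:

```python
# All "language model" parameters are frozen; only a short sequence of
# continuous prefix vectors, prepended to the input embeddings, is trained.
import torch
import torch.nn as nn

d_model, prefix_len, vocab = 64, 5, 1000

# A stand-in frozen "LM": embedding + transformer encoder + output head.
embed = nn.Embedding(vocab, d_model)
lm = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
head = nn.Linear(d_model, vocab)
for module in (embed, lm, head):
    for p in module.parameters():
        p.requires_grad = False            # the pretrained LM stays frozen

# The only trainable parameters: the continuous prefix.
prefix = nn.Parameter(torch.randn(prefix_len, d_model) * 0.02)
optimizer = torch.optim.Adam([prefix], lr=1e-3)

tokens = torch.randint(0, vocab, (8, 12))              # a toy batch
x = embed(tokens)                                      # (8, 12, d_model)
x = torch.cat([prefix.expand(8, -1, -1), x], dim=1)    # prepend the prefix
logits = head(lm(x))[:, prefix_len:]                   # predictions for real tokens
loss = nn.functional.cross_entropy(logits.reshape(-1, vocab), tokens.reshape(-1))
loss.backward()
optimizer.step()                                       # updates only the prefix
```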

Language Models are Unsupervised Multitask Learners

It is demonstrated that language models begin to learn a range of language processing tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText, suggesting a promising path towards building language processing systems that learn to perform tasks from their naturally occurring demonstrations.

Generate Neural Template Explanations for Recommendation

Experimental results on real-world datasets show that NETE consistently outperforms state-of-the-art explanation generation approaches in terms of sentence quality and expressiveness, and case-study analysis shows the advantages of NETE in generating diverse and controllable explanations.

Learning to Prompt for Vision-Language Models

Context Optimization (CoOp) is proposed, a simple approach specifically for adapting CLIP-like vision-language models to downstream image recognition, which achieves superb domain generalization performance compared with the zero-shot model using hand-crafted prompts.

Unifying Vision-and-Language Tasks via Text Generation

This work proposes a unified framework that learns different tasks in a single architecture with the same language modeling objective, i.e., multimodal conditional text generation, where the models learn to generate labels in text based on the visual and textual inputs.

Multitask Prompted Training Enables Zero-Shot Task Generalization

A system is developed for easily mapping any natural language task into a human-readable prompted form, and a pretrained encoder-decoder model is fine-tuned on this multitask mixture covering a wide variety of tasks.
...