A Model of Two Tales: Dual Transfer Learning Framework for Improved Long-tail Item Recommendation

@inproceedings{Zhang2021AMO,
  title={A Model of Two Tales: Dual Transfer Learning Framework for Improved Long-tail Item Recommendation},
  author={Yin Zhang and Derek Zhiyuan Cheng and Tiansheng Yao and Xinyang Yi and Lichan Hong and Ed H. Chi},
  booktitle={Proceedings of the Web Conference 2021},
  year={2021}
}
Highly skewed long-tail item distributions are common in recommender systems and significantly hurt model performance on tail items. To improve tail-item recommendation, we transfer knowledge from head items to tail items, leveraging the rich user feedback on head items and the semantic connections between head and tail items. Specifically, we propose a novel dual transfer learning framework that jointly learns the knowledge transfer at both the model level and the item…
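The abstract describes transferring knowledge from data-rich head items to data-poor tail items at a high level. As a generic illustration of this idea (not the paper's actual dual transfer architecture), the sketch below fits a least-squares map from content features to collaborative embeddings on head items, where abundant feedback makes those embeddings reliable, and reuses the map to synthesize embeddings for tail items. All names, dimensions, and the synthetic data are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: head items have both content features and
# well-trained collaborative-filtering (CF) embeddings; tail items
# have reliable content features but too little feedback for good
# CF embeddings.
n_head, n_tail, d_content, d_cf = 200, 50, 16, 8

head_content = rng.normal(size=(n_head, d_content))
true_map = rng.normal(size=(d_content, d_cf))            # unknown in practice
head_cf = head_content @ true_map + 0.01 * rng.normal(size=(n_head, d_cf))

tail_content = rng.normal(size=(n_tail, d_content))

# Fit a least-squares map from content space to CF space on head items,
# then transfer it to synthesize embeddings for the tail items.
W, *_ = np.linalg.lstsq(head_content, head_cf, rcond=None)
tail_cf_est = tail_content @ W   # transferred embeddings for tail items
```

The transferred `tail_cf_est` vectors can then stand in for (or regularize) the undertrained tail-item embeddings in a downstream retrieval or ranking model.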


Intent Disentanglement and Feature Self-supervision for Novel Recommendation
TLDR
This work discloses the mechanism that drives a user's interactions towards popular or niche items by disentangling her intent into conformity influence (popularity) and personal interest (preference), and presents a unified end-to-end framework that simultaneously optimizes accuracy and novelty targets based on the disentangled intents.
Self-Supervised Hypergraph Transformer for Recommender Systems
TLDR
SHT, a novel Self-Supervised Hypergraph Transformer framework that augments user representations by explicitly exploring global collaborative relationships, is proposed for data augmentation over the user-item interaction graph, enhancing the robustness of recommender systems.
Harmless Transfer Learning for Item Embeddings
TLDR
This work proposes a harmless transfer learning framework that limits the impact of potential biases in both the definition and optimization of the transfer loss, and uses a lexicographic optimization framework to efficiently incorporate the information of the transfer loss without hurting the minimization of the main prediction loss.
Co-training Disentangled Domain Adaptation Network for Leveraging Popularity Bias in Recommenders
TLDR
A co-training disentangled domain adaptation network (CD²AN) is proposed that co-trains biased and unbiased models and outperforms existing debiased solutions under popularity-distribution shift and long-tail distribution shift.
ResNorm: Tackling Long-tailed Degree Distribution Issue in Graph Neural Networks via Normalization
TLDR
This paper proposes a novel normalization method for GNNs, termed ResNorm (Reshaping the long-tailed distribution into a normal-like distribution via normalization), and designs a shift operation for ResNorm that simulates the degree-specific parameter strategy in a low-cost manner.
Feature-aware Diversified Re-ranking with Disentangled Representations for Relevant Recommendation
TLDR
A Feature Disentanglement Self-Balancing Re-ranking framework (FDSB) is proposed to capture feature-aware diversity; significant improvements in both recommendation quality and user experience verify the effectiveness of the approach.
Deep Meta-learning in Recommendation Systems: A Survey
TLDR
A taxonomy is proposed that organizes existing methods by recommendation scenario, meta-learning technique, and meta-knowledge representation, outlining the design space for meta-learning-based recommendation methods.
Improving Item Cold-start Recommendation via Model-agnostic Conditional Variational Autoencoder
TLDR
This paper tackles the item cold-start problem by generating enhanced warmed-up ID embeddings for cold items from historical data and limited interaction records, using a model-agnostic Conditional Variational Autoencoder based Recommendation (CVAR) framework.
A Survey on Long-Tailed Visual Recognition
TLDR
This survey focuses on the problems caused by long-tailed data distributions, sorts out representative long-tailed visual recognition datasets, summarizes mainstream long-tail studies, and quantitatively examines 20 widely used, large-scale visual datasets proposed in the last decade.
Interpolative Distillation for Unifying Biased and Debiased Recommendation
TLDR
An Interpolative Distillation framework is proposed that interpolates the biased and debiased models at the user-item pair level by distilling a student model; it stands out on both tests and shows remarkable gains on less popular items.

References

Showing 1–10 of 64 references
Meta-learning on Heterogeneous Information Networks for Cold-start Recommendation
TLDR
This work proposes a novel semantic-enhanced task constructor and a co-adaptation meta-learner to address two questions: how to capture HIN-based semantics in the meta-learning setting, and how to learn general knowledge that can be easily adapted to multifaceted semantics.
Learning to Model the Tail
TLDR
Presents results on image classification datasets (SUN, Places, and ImageNet) tuned for the long-tailed setting that significantly outperform common heuristics such as data resampling or reweighting.
Long-tail Session-based Recommendation
TLDR
A novel network architecture, TailNet, is proposed to improve long-tail recommendation performance while maintaining accuracy competitive with other methods.
Sampling-Bias-Corrected Neural Modeling for Large Corpus Item Recommendations
TLDR
A novel algorithm for estimating item frequency from streaming data that works without a fixed item vocabulary, produces unbiased estimates, and adapts to changes in the item distribution.
Addressing the Item Cold-Start Problem by Attribute-Driven Active Learning
TLDR
This paper designs useful user-selection criteria based on items' attributes and users' rating history, combines the criteria in an optimization framework for selecting users, and generates accurate rating predictions for the remaining unselected users.
MAMO: Memory-Augmented Meta-Optimization for Cold-start Recommendation
TLDR
Two memory matrices are designed that store task-specific and feature-specific memories, which guide the model with personalized parameter initialization and user-preference prediction; a meta-optimization approach is adopted to optimize the proposed method.
MeLU: Meta-Learned User Preference Estimator for Cold-Start Recommendation
TLDR
A meta-learning-based recommender system called MeLU is proposed that can estimate a new user's preferences from a few consumed items and provides an evidence candidate selection strategy that determines distinguishing items for customized preference estimation.
Mixed Negative Sampling for Learning Two-tower Neural Networks in Recommendations
TLDR
This paper showcases how to apply a two-tower neural network framework, also known as a dual encoder in the natural language processing community, to improve a large-scale, production app recommendation system, and offers a novel negative sampling approach called Mixed Negative Sampling (MNS).
The long tail of recommender systems and how to leverage it
TLDR
This paper splits the whole item set into head and tail parts and clusters only the tail items, showing that this reduces recommendation error rates for tail items while maintaining reasonable computational performance.
Challenging the Long Tail Recommendation
TLDR
Empirical experiments show that the proposed algorithms are effective at recommending long-tail items and outperform state-of-the-art recommendation techniques.