Corpus ID: 246285492

Recency Dropout for Recurrent Recommender Systems

@article{Chang2022RecencyDF,
  title={Recency Dropout for Recurrent Recommender Systems},
  author={Bo-Yu Chang and Can Xu and Matt Le and Jingchen Feng and Ya Le and Sriraj Badam and Ed Chi and Minmin Chen},
  journal={ArXiv},
  year={2022},
  volume={abs/2201.11016}
}
Recurrent recommender systems have been successful in capturing the temporal dynamics in users' activity trajectories. However, recurrent neural networks (RNNs) are known to have difficulty learning long-term dependencies. As a consequence, RNN-based recommender systems tend to overly focus on short-term user interests. This is referred to as the recency bias, which could negatively affect the long-term user experience as well as the health of the ecosystem. In this paper, we introduce the recency… 
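The abstract is truncated, but based on the title, a plausible reading is that recency dropout is a training-time data augmentation that removes some of the most recent items from a user's activity sequence, discouraging the model from leaning only on short-term signals. A minimal sketch under that assumption (the function name, `max_drop`, and `p` are hypothetical, not from the paper):

```python
import random

def recency_dropout(sequence, max_drop=3, p=0.5):
    """Hypothetical sketch of recency dropout as a data augmentation.

    With probability `p`, drop between 1 and `max_drop` of the most
    recent items from the input sequence during training, so the RNN
    must rely on longer-term history to predict the next item.
    """
    if len(sequence) > max_drop and random.random() < p:
        k = random.randint(1, max_drop)
        return sequence[:-k]
    return sequence

# Example: item IDs in a user's watch history, oldest to newest.
history = [101, 102, 103, 104, 105, 106]
augmented = recency_dropout(history)
```

The output is always a prefix of the original history, missing at most `max_drop` of the newest items; at inference time the full sequence would be used unchanged.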


References

SHOWING 1-10 OF 59 REFERENCES

Recurrent Recommender Networks

Recurrent Recommender Networks (RRN) are proposed that are able to predict future behavioral trajectories by endowing both users and movies with a Long Short-Term Memory (LSTM) autoregressive model that captures dynamics, in addition to a more traditional low-rank factorization.

Session-based Recommendations with Recurrent Neural Networks

It is argued that by modeling the whole session, more accurate recommendations can be provided by an RNN-based approach for session-based recommendations, and introduced several modifications to classic RNNs such as a ranking loss function that make it more viable for this specific problem.

Self-Attentive Sequential Recommendation

Extensive empirical studies show that the proposed self-attention based sequential model (SASRec) outperforms various state-of-the-art sequential models (including MC/CNN/RNN-based approaches) on both sparse and dense datasets.

Latent Cross: Making Use of Context in Recurrent Recommender Systems

This paper offers "Latent Cross," an easy-to-use technique to incorporate contextual data in the RNN by embedding the context feature first and then performing an element-wise product of the context embedding with model's hidden states.
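The element-wise product described above can be sketched in a few lines. This is an illustrative NumPy version, not the paper's implementation; the `(1 + w)` residual-style form and the specific shapes are assumptions based on the summary:

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_dim = 8

# Hypothetical inputs: an RNN hidden state h_t, and a context feature
# (e.g. time of day) embedded into the same dimensionality.
h_t = rng.standard_normal(hidden_dim)
context_embedding = rng.standard_normal(hidden_dim)

# Latent Cross: the context embedding acts as a multiplicative mask
# on the hidden state; the (1 + w) form keeps it near identity when
# the context embedding is small.
h_crossed = (1.0 + context_embedding) * h_t
```

The crossed state `h_crossed` then feeds the next layer or the output softmax, letting context modulate the recurrent representation multiplicatively rather than by concatenation.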

Towards Neural Mixture Recommender for Long Range Dependent User Sequences

A neural Multi-temporal-range Mixture Model (M3) is proposed as a tailored solution to deal with both short-term and long-term dependencies and consistently outperforms state-of-the-art sequential recommendation methods.

Top-K Off-Policy Correction for a REINFORCE Recommender System

This work presents a general recipe for addressing biases in a production top-K recommender system at YouTube, built with a policy-gradient-based algorithm, i.e. REINFORCE, and proposes a novel top-K off-policy correction to account for the policy recommending multiple items at a time.

Sequential Recommendation with User Memory Networks

A memory-augmented neural network (MANN) integrated with the insights of collaborative filtering is designed for recommendation, which stores and updates users' historical records explicitly, enhancing the expressiveness of the model.

Sequential Recommender System based on Hierarchical Attention Networks

A novel two-layer hierarchical attention network is proposed, which takes the above properties into account, to recommend the next item a user might be interested in, and demonstrates the superiority of the method compared with other state-of-the-art ones.

Designing for serendipity in a university course recommendation system

This paper builds one set of models based on course catalog descriptions (BOW) and another set informed by enrollment histories (course2vec) and discusses the role of the kind of information presented by the system in a student's decision to accept a recommendation from either algorithm.

Factorized Recurrent Neural Architectures for Longer Range Dependence

This article provides a modified recurrent neural architecture that mitigates the issue of faulty memory through redundancy while keeping the compute time constant and enables better memorization and longer-term memory.