What to Do Next: Modeling User Behaviors by Time-LSTM

@inproceedings{Zhu2017WhatTD,
  title={What to Do Next: Modeling User Behaviors by Time-LSTM},
  author={Y. Zhu and Hao Li and Yikang Liao and Beidou Wang and Ziyu Guan and Haifeng Liu and Deng Cai},
  booktitle={IJCAI},
  year={2017}
}
Recently, Recurrent Neural Network (RNN) solutions for recommender systems (RS) have become increasingly popular. Time-LSTM equips LSTM with time gates to model the time intervals between a user's consecutive actions. These time gates are specifically designed so that, compared with traditional RNN solutions, Time-LSTM better captures both users' short-term and long-term interests, thereby improving recommendation performance. Experimental results on two real-world datasets show the superiority of the proposed recommendation method…
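As a rough sketch of the time-gate idea from the abstract (the weight names, shapes, and the exact form of the gate here are illustrative assumptions, not the paper's notation), one LSTM step extended with an interval-dependent gate might look like:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def time_lstm_step(x, h_prev, c_prev, dt, W, U, W_t, b):
    """One step of an LSTM cell extended with a time gate (a sketch of the
    Time-LSTM idea; weight names and the gate's exact form are assumptions).

    x      : input vector at this step, shape (n,)
    h_prev : previous hidden state, shape (d,)
    c_prev : previous cell state, shape (d,)
    dt     : time interval since the previous user action (scalar)
    W, U, b: stacked gate parameters, shapes (4d, n), (4d, d), (4d,)
    W_t    : time-gate parameters, shape (d,)
    """
    d = h_prev.shape[0]
    z = W @ x + U @ h_prev + b              # shared pre-activations, shape (4d,)
    i = sigmoid(z[:d])                      # input gate
    f = sigmoid(z[d:2 * d])                 # forget gate
    o = sigmoid(z[2 * d:3 * d])             # output gate
    g = np.tanh(z[3 * d:])                  # candidate cell update
    # Time gate: a function of the elapsed interval dt that rescales how
    # much new information enters the cell, so long gaps can damp the
    # contribution of the now-stale short-term signal.
    t_gate = sigmoid(W_t * dt)              # elementwise, shape (d,)
    c = f * c_prev + i * t_gate * g
    h = o * np.tanh(c)
    return h, c
```

The point of the sketch is only that `dt` enters the cell update through its own gate, rather than being appended to the input features as in plain RNN baselines.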

Citations

Time-aware sequence model for next-item recommendation
TLDR
A novel sequential recommendation model, Interval- and Duration-aware LSTM with Embedding layer and Coupled input and forget gate (IDLSTM-EC), leverages time-interval and duration information to accurately capture users' long-term and short-term preferences.
Attention with Long-Term Interval-Based Gated Recurrent Units for Modeling Sequential User Behaviors
TLDR
A network featuring Attention with Long-term Interval-based Gated Recurrent Units (ALI-GRU) models temporal sequences of user actions, and a specially designed matrix-form attention function learns weights of both long-term preferences and short-term user intents automatically.
TiSSA: A Time Slice Self-Attention Approach for Modeling Sequential User Behaviors
TLDR
A novel Time Slice Self-Attention mechanism is introduced into RNNs for better modeling of sequential user behaviors; it utilizes time-interval-based gated recurrent units to exploit the temporal dimension when encoding user actions, together with a specially designed time-slice hierarchical self-attention function.
Attention with Long-Term Interval-Based Deep Sequential Learning for Recommendation
TLDR
A network featuring Attention with Long-term Interval-based Gated Recurrent Units (ALI-GRU) models temporal sequences of user actions and achieves significant improvement compared to state-of-the-art RNN-based methods.
Adaptive User Modeling with Long and Short-Term Preferences for Personalized Recommendation
TLDR
An attention-based framework to combine users' long-term and short-term preferences is proposed, so that users' representations can be generated adaptively according to the specific context; it consistently outperforms several state-of-the-art methods.
Time is of the Essence: A Joint Hierarchical RNN and Point Process Model for Time and Item Predictions
TLDR
The experimental results indicate that the proposed model improves recommendations significantly on two datasets over a strong baseline, while simultaneously improving return-time predictions over a baseline return-time prediction model.
Sequential Recommender via Time-aware Attentive Memory Network
TLDR
A temporal gating methodology improves the attention mechanism and recurrent units so that temporal information is considered in both information filtering and state transition; a hybrid sequential recommender, the Multi-hop Time-aware Attentive Memory network (MTAM), integrates long-term and short-term preferences.
Time Matters: Sequential Recommendation with Complex Temporal Information
TLDR
This paper discovers two elementary temporal patterns of user behaviors and devises a neural architecture that jointly learns those temporal patterns to model user dynamic preferences; experiments demonstrate the superiority of the model compared with state-of-the-art methods.
Where to Go Next: Modeling Long- and Short-Term User Preferences for Point-of-Interest Recommendation
TLDR
This work proposes a novel method named Long- and Short-Term Preference Modeling (LSTPM) for next-POI recommendation that consists of a nonlocal network for long-term preference modeling and a geo-dilated RNN for short-term preference learning.
PSTIE: Time Information Enhanced Personalized Search
TLDR
PSTIE, a fine-grained Time Information Enhanced model to construct more accurate user interest representations for Personalized Search, is proposed and experiments show that PSTIE can effectively improve the ranking quality over state-of-the-art models.
...

References

Showing 1-10 of 38 references
A Dynamic Recurrent Model for Next Basket Recommendation
TLDR
This work proposes a novel model, Dynamic REcurrent bAsket Model (DREAM), based on Recurrent Neural Network (RNN), which not only learns a dynamic representation of a user but also captures global sequential features among baskets.
Parallel Recurrent Neural Network Architectures for Feature-rich Session-based Recommendations
TLDR
It is shown that p-RNN architectures with proper training achieve significant performance improvements over feature-less session models, while all session-based models outperform the item-to-item baseline.
Phased LSTM: Accelerating Recurrent Network Training for Long or Event-based Sequences
TLDR
This work introduces the Phased LSTM model, which extends the LSTM unit with a new time gate controlled by a parametrized oscillation with a frequency range, requiring updates of the memory cell only during a small percentage of the cycle.
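The oscillating time gate summarized above has a simple piecewise form; the following is a sketch using the commonly described parameters (`tau` = oscillation period, `s` = phase shift, `r_on` = fraction of the period in which the gate is open, `alpha` = small leak when closed) — the names are illustrative, not quoted from the paper:

```python
def phased_lstm_time_gate(t, tau, s, r_on, alpha=1e-3):
    """Openness k(t) of a Phased-LSTM-style time gate as a function of
    wall-clock time t (a sketch; parameter names are assumptions)."""
    phi = ((t - s) % tau) / tau            # phase within the cycle, in [0, 1)
    if phi < 0.5 * r_on:                   # rising half of the open phase
        return 2.0 * phi / r_on
    elif phi < r_on:                       # falling half of the open phase
        return 2.0 - 2.0 * phi / r_on
    else:                                  # closed phase: small leak alpha*phi
        return alpha * phi
```

Because the gate is almost always closed, cell updates happen sparsely in time, which is what speeds up training on long or event-based sequences.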
Improved Recurrent Neural Networks for Session-based Recommendations
TLDR
This work proposes the application of two techniques to improve RNN-based models for session-based recommendations performance, namely, data augmentation, and a method to account for shifts in the input data distribution.
Long Short-Term Memory
TLDR
A novel, efficient, gradient based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
Adaptation and Evaluation of Recommendations for Short-term Shopping Goals
TLDR
The results indicate that maintaining short-term content-based and recency-based profiles of the visitors can lead to significant accuracy increases and show that the choice of the algorithm for learning the long-term preferences is particularly important at the beginning of new shopping sessions.
User Preference Learning for Online Social Recommendation
TLDR
This paper presents a new framework of online social recommendation from the viewpoint of online graph regularized user preference learning (OGRPL), which incorporates both the collaborative user-item relationship and item content features into a unified preference learning process, and develops an efficient iterative procedure, OGRPL-FW, to solve the proposed online optimization problem.
Recurrent nets that time and count
  • F. Gers, J. Schmidhuber
  • Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN 2000)
  • 2000
TLDR
Surprisingly, LSTM augmented by "peephole connections" from its internal cells to its multiplicative gates can learn the fine distinction between sequences of spikes separated by either 50 or 49 discrete time steps, without the help of any short training exemplars.
BPR: Bayesian Personalized Ranking from Implicit Feedback
TLDR
This paper presents a generic optimization criterion, BPR-Opt, for personalized ranking that is the maximum posterior estimator derived from a Bayesian analysis of the problem, and provides a generic learning algorithm for optimizing models with respect to BPR-Opt.
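For a single (user, preferred item i, other item j) triple, the BPR-Opt criterion summarized above reduces to a regularized logistic loss on the score difference; a minimal sketch (the parameter names `theta` and `lam` are illustrative):

```python
import numpy as np

def bpr_loss(x_ui, x_uj, theta, lam=0.01):
    """Negative BPR-Opt for one (user, item i, item j) triple (a sketch of
    the criterion from Rendle et al.; theta stands for the model parameters
    being regularized, lam is an assumed regularization weight).

    Minimizing -ln sigma(x_ui - x_uj) pushes the observed item i to score
    above the unobserved item j for this user.
    """
    x_uij = x_ui - x_uj                              # score difference
    return -np.log(1.0 / (1.0 + np.exp(-x_uij))) + lam * np.sum(theta ** 2)
```

In practice this loss is summed over sampled triples and minimized by stochastic gradient descent, which is the generic learning algorithm the paper describes.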
Mobile Query Recommendation via Tensor Function Learning
TLDR
This paper introduces the problem of query recommendation on mobile devices, models the user-location-query relations with a tensor representation via tensor function learning, and develops an efficient alternating direction method of multipliers (ADMM) scheme to solve the introduced problem.
...