Factorization Meets the Item Embedding: Regularizing Matrix Factorization with Item Co-occurrence
@inproceedings{Liang2016FactorizationMT,
  title     = {Factorization Meets the Item Embedding: Regularizing Matrix Factorization with Item Co-occurrence},
  author    = {Dawen Liang and Jaan Altosaar and Laurent Charlin and David M. Blei},
  booktitle = {Proceedings of the 10th ACM Conference on Recommender Systems},
  year      = {2016}
}
Matrix factorization (MF) models and their extensions are standard in modern recommender systems. CoFactor is inspired by the recent success of word embedding models (e.g., word2vec), which can be interpreted as factorizing the word co-occurrence matrix. We show that this model significantly improves performance over MF models on several datasets with little additional computational overhead. We provide qualitative results that explain how CoFactor improves the quality of the inferred…
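The abstract describes jointly factorizing the user-item matrix and an item co-occurrence matrix with shared item factors. The sketch below is a hypothetical toy illustration of that idea (plain gradient descent on random data, not the authors' implementation or their exact objective):

```python
import numpy as np

# CoFactor-style joint factorization sketch: the user-item matrix R and the
# item-item co-occurrence matrix M share the item factors V.
rng = np.random.default_rng(0)
n_users, n_items, k = 30, 20, 5

R = (rng.random((n_users, n_items)) < 0.2).astype(float)  # toy implicit feedback
M = ((R.T @ R) > 0).astype(float)                         # crude co-occurrence matrix

U = 0.1 * rng.standard_normal((n_users, k))   # user factors
V = 0.1 * rng.standard_normal((n_items, k))   # shared item factors
W = 0.1 * rng.standard_normal((n_items, k))   # item "context" factors
lam, lr = 0.1, 0.01

loss0 = np.sum((R - U @ V.T) ** 2) + np.sum((M - V @ W.T) ** 2)
for _ in range(200):
    E_r = R - U @ V.T          # MF residual
    E_m = M - V @ W.T          # co-occurrence residual
    U += lr * (E_r @ V - lam * U)
    V += lr * (E_r.T @ U + E_m @ W - lam * V)  # V receives both gradients
    W += lr * (E_m.T @ V - lam * W)

loss = np.sum((R - U @ V.T) ** 2) + np.sum((M - V @ W.T) ** 2)
```

Because `V` appears in both residual terms, the co-occurrence matrix acts as a regularizer on the item embeddings learned by MF, which is the core intuition of the paper.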
238 Citations
Metric Factorization with Item Cooccurrence for Recommendation
- Computer Science · Symmetry
- 2020
A novel recommendation model, metric factorization with item co-occurrence for recommendation (MFIC), which uses the Euclidean distance to jointly decompose the user–item interaction matrix and the item–item co-occurrence matrix with shared latent factors.
Exploiting user and item embedding in latent factor models for recommendations
- Computer Science · WI
- 2017
This paper proposes mixture models which combine the technology of MF and the embedding, and shows that some of these models significantly improve the performance over the state-of-the-art models on two real-world datasets.
Co-Factorization Model for Collaborative Filtering with Session-based Data
- Computer Science · ArXiv
- 2021
This work proposes a matrix factorization method that reflects the localized relationships between strongly related items in the latent item representations and is thereby able to exploit item-item relations.
Evaluating and improving the interpretability of item embeddings using item-tag relevance information
- Computer Science, Psychology · Frontiers of Computer Science
- 2019
This paper proposes a tag-informed item embedding (TIE) model that jointly factorizes the user-item interaction matrix, the item-item co-occurrence matrix and the item-tag relevance matrix with shared item embeddings, so that different forms of information can cooperate with each other to learn better item embeddings.
Effective metric learning with co-occurrence embedding for collaborative recommendations
- Computer Science · Neural Networks
- 2020
Collaborative Item Embedding Model for Implicit Feedback Data
- Computer Science · ICWE
- 2017
This work proposes a method to extract the relationships between items and embed them into the latent vectors of the factorization model, which combines two worlds: matrix factorization for collaborative filtering and item embedding, a similar concept to word embedding in language processing.
On the instability of embeddings for recommender systems: the case of matrix factorization
- Computer Science · SAC
- 2021
A generalization of MF is presented, called Nearest Neighbors Matrix Factorization (NNMF), which propagates the information about items and users to their neighbors, speeding up the training procedure and extending the amount of information that supports recommendations and representations.
Latent Factor Model with User and Fused Item Embeddings for Recommendation
- Computer Science · 2020 IEEE International Conference on Artificial Intelligence and Computer Applications (ICAICA)
- 2020
Unlike existing embedding-based LFMs, the proposed model learns item latent representations from the combination of the co-liked item co-occurrence matrix (LICM) and the co-disliked item co-occurrence matrix (DICM), which reduces the trainable parameters and brings quick convergence.
Analyzing and improving stability of matrix factorization for recommender systems
- Computer Science · Journal of Intelligent Information Systems
- 2022
This paper focuses on the effects of training the same model on the same data, but with different initial values for the latent representations of users and items, and presents a generalization of MF called Nearest Neighbors Matrix Factorization (NNMF), which largely improves the stability of both representations and recommendations.
Embedding Factorization Models for Jointly Recommending Items and User Generated Lists
- Computer Science · SIGIR
- 2017
Embedded factorization models are devised, which extend traditional factorization methods by incorporating item-item (item-item-list) co-occurrence with embedding-based algorithms and are capable of solving the new-item cold-start problem, where items have not yet been consumed by users but appear in user-generated lists.
References
Hidden factors and hidden topics: understanding rating dimensions with review text
- Computer Science · RecSys
- 2013
This paper aims to combine latent rating dimensions (such as those of latent-factor recommender systems) with latent review topics (such as those learned by topic models like LDA), yielding a model that more accurately predicts product ratings by harnessing the information present in review text.
Relational learning via collective matrix factorization
- Computer Science · KDD
- 2008
This model generalizes several existing matrix factorization methods, and therefore yields new large-scale optimization algorithms for these problems, which can handle any pairwise relational schema and a wide variety of error models.
Neural Word Embedding as Implicit Matrix Factorization
- Computer Science · NIPS
- 2014
It is shown that using a sparse Shifted Positive PMI word-context matrix to represent words improves results on two word similarity tasks and one of two analogy tasks, and it is conjectured that this stems from the weighted nature of SGNS's factorization.
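This reference shows that SGNS implicitly factorizes a shifted PMI matrix; the Shifted Positive PMI (SPPMI) matrix it proposes is `max(PMI(w, c) - log k, 0)`, where `k` is the number of negative samples. A toy sketch on a made-up corpus (window size and `k` chosen arbitrarily for illustration):

```python
import numpy as np
from collections import Counter

corpus = "the cat sat on the mat the cat ate the rat".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
window, k = 2, 5  # context window and SGNS negative-sample count

# Count word-context co-occurrences within the window.
pair_counts = Counter()
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if i != j:
            pair_counts[(idx[w], idx[corpus[j]])] += 1

n = len(vocab)
C = np.zeros((n, n))
for (wi, ci), c in pair_counts.items():
    C[wi, ci] = c

total = C.sum()
pw = C.sum(axis=1, keepdims=True) / total   # marginal word probabilities
pc = C.sum(axis=0, keepdims=True) / total   # marginal context probabilities
with np.errstate(divide="ignore"):
    pmi = np.log((C / total) / (pw * pc))   # log(0) = -inf for unseen pairs
sppmi = np.maximum(pmi - np.log(k), 0.0)    # shift by log k, clip at zero
```

The clipping at zero is what keeps the matrix sparse: unseen and weakly associated pairs all map to exactly zero.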
Factorization Machines
- Computer Science · 2010 IEEE International Conference on Data Mining
- 2010
Factorization Machines (FM) are introduced which are a new model class that combines the advantages of Support Vector Machines (SVM) with factorization models and can mimic these models just by specifying the input data (i.e. the feature vectors).
Generalized Probabilistic Matrix Factorizations for Collaborative Filtering
- Computer Science · 2010 IEEE International Conference on Data Mining
- 2010
It is illustrated that simpler models directly capturing correlations among latent factors can outperform existing PMF models, side information can benefit prediction accuracy, and accounting for row/column biases leads to improvements in predictive performance.
Latent Trajectory Modeling: A Light and Efficient Way to Introduce Time in Recommender Systems
- Computer Science · RecSys
- 2015
It is proposed to learn item and user representations such that any time-ordered sequence of items selected by a user is represented as a trajectory of that user in the representation space, enabling rating prediction with a classical matrix factorization scheme.
Regression-based latent factor models
- Computer Science · KDD
- 2009
A novel latent factor model to accurately predict response for large scale dyadic data in the presence of features is proposed and induces a stochastic process on the dyadic space with kernel given by a polynomial function of features.
BPR: Bayesian Personalized Ranking from Implicit Feedback
- Computer Science · UAI
- 2009
This paper presents a generic optimization criterion, BPR-Opt, for personalized ranking that is the maximum posterior estimator derived from a Bayesian analysis of the problem, and provides a generic learning algorithm for optimizing models with respect to BPR-Opt.
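The BPR criterion summarized above optimizes pairwise preferences: for user u, an observed item i should score above an unobserved item j, with loss -log σ(x_ui - x_uj) plus L2 regularization. A minimal illustrative SGD sketch (toy data and hyperparameters are made up, not the paper's code):

```python
import numpy as np

def bpr_step(U, V, u, i, j, lr=0.05, lam=0.01):
    """One SGD step on the BPR pairwise loss for triple (u, i, j)."""
    x_uij = U[u] @ (V[i] - V[j])        # score difference x_ui - x_uj
    g = 1.0 / (1.0 + np.exp(x_uij))     # sigma(-x): gradient scale of -log sigma(x)
    du = g * (V[i] - V[j]) - lam * U[u]
    di = g * U[u] - lam * V[i]
    dj = -g * U[u] - lam * V[j]
    U[u] += lr * du
    V[i] += lr * di
    V[j] += lr * dj

rng = np.random.default_rng(1)
U = 0.1 * rng.standard_normal((5, 4))   # user factors
V = 0.1 * rng.standard_normal((10, 4))  # item factors
for _ in range(100):
    bpr_step(U, V, u=0, i=2, j=7)       # repeatedly prefer item 2 over item 7
```

After training on this single triple, user 0's predicted score for item 2 exceeds that for item 7, which is exactly the pairwise ordering BPR optimizes.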
VBPR: Visual Bayesian Personalized Ranking from Implicit Feedback
- Computer Science · AAAI
- 2016
This paper proposes a scalable factorization model to incorporate visual signals into predictors of people's opinions, which is applied to a selection of large, real-world datasets and makes use of visual features extracted from product images using (pre-trained) deep networks.
Probabilistic Matrix Factorization
- Computer Science · NIPS
- 2007
The Probabilistic Matrix Factorization (PMF) model is presented, which scales linearly with the number of observations and performs well on the large, sparse, and very imbalanced Netflix dataset and is extended to include an adaptive prior on the model parameters.