Corpus ID: 239998170

Revisiting the Performance of iALS on Item Recommendation Benchmarks

@article{Rendle2021RevisitingTP,
  title={Revisiting the Performance of iALS on Item Recommendation Benchmarks},
  author={Steffen Rendle and Walid Krichene and Li Zhang and Yehuda Koren},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.14037}
}
Matrix factorization learned by implicit alternating least squares (iALS) is a popular baseline in recommender system research publications. iALS is known to be one of the most computationally efficient and scalable collaborative filtering methods. However, recent studies suggest that its prediction quality is not competitive with the current state of the art, in particular autoencoders and other item-based collaborative filtering methods. In this work, we revisit the iALS algorithm and present… 
2 Citations

iALS++: Speeding up Matrix Factorization with Subspace Optimization
TLDR
A new solver, iALS++, is proposed that combines the vector-processing advantages of iALS with the low computational complexity of iCD; it can solve benchmark problems such as Movielens 20M or the Million Song Dataset in a few minutes, even for 1000-dimensional embedding vectors. (A rough sketch of the subspace idea follows below.)
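
As a rough illustration of the subspace idea, the sketch below applies block coordinate descent to a single user's weighted ridge problem: blocks of size b sit between iCD (b = 1) and a full iALS solve (b = d). This is a simplified sketch, assuming dense inputs; the real iALS++ additionally reuses a shared Gramian so it never touches unobserved items, and all names and shapes here are assumptions.

```python
import numpy as np

def solve_user_blockwise(V, w, r, lam=0.1, block=4):
    """Block coordinate descent for min_p sum_i w_i (r_i - v_i . p)^2 + lam |p|^2.

    block=1 recovers coordinate descent (iCD-like); block=d recovers a full
    least-squares solve (iALS-like). Intermediate sizes trade the two off.
    """
    n, d = V.shape
    p = np.zeros(d)
    resid = r - V @ p                        # residual r - V p, kept up to date
    for start in range(0, d, block):
        idx = slice(start, min(start + block, d))
        Vb = V[:, idx]
        # b x b normal equations for this block, holding the other coords fixed
        A = Vb.T @ (w[:, None] * Vb) + lam * np.eye(Vb.shape[1])
        g = Vb.T @ (w * (resid + Vb @ p[idx]))
        p_new = np.linalg.solve(A, g)
        resid += Vb @ (p[idx] - p_new)       # keep the residual consistent
        p[idx] = p_new
    return p
```
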
ALX: Large Scale Matrix Factorization on TPUs
We present ALX, an open-source library for distributed matrix factorization using Alternating Least Squares, written in JAX. Our design allows for efficient use of the TPU architecture and scales…

References

SHOWING 1-10 OF 31 REFERENCES
Fast Matrix Factorization for Online Recommendation with Implicit Feedback
TLDR
A new learning algorithm based on the element-wise Alternating Least Squares (eALS) technique is designed for efficiently optimizing a matrix factorization (MF) model with variably-weighted missing data; this efficiency is then exploited to seamlessly devise an incremental update strategy that instantly refreshes an MF model given new feedback.
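
A minimal sketch of the element-wise update, assuming dense ratings R and per-entry weights W for clarity: each coordinate of a user vector is refreshed in closed form from cached predictions, so no d x d linear system is ever solved. The names and hyperparameters are illustrative.

```python
import numpy as np

def eals_update_user(u, R, W, X, Y, lam=0.01):
    """Refresh every coordinate of user u's vector in closed form (eALS-style).

    R, W: (users x items) ratings and per-entry weights, dense for clarity.
    Cached predictions are patched coordinate by coordinate.
    """
    d = X.shape[1]
    pred = Y @ X[u]                                  # current scores for user u
    for f in range(d):
        pred_minus_f = pred - X[u, f] * Y[:, f]      # predictions without coord f
        num = np.dot(W[u] * (R[u] - pred_minus_f), Y[:, f])
        den = np.dot(W[u], Y[:, f] ** 2) + lam
        X[u, f] = num / den                          # closed-form 1-D minimizer
        pred = pred_minus_f + X[u, f] * Y[:, f]      # patch the cache back in
    return X
```
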
Reenvisioning the comparison between Neural Collaborative Filtering and Matrix Factorization
TLDR
This is the first work, to the best of the authors' knowledge, to explore several complementary evaluation dimensions for an array of state-of-the-art algorithms covering recent adaptations of ANNs and MF; it aims to show the potential these techniques may have for beyond-accuracy evaluation while analyzing the effect these complementary dimensions may have on reproducibility.
On the Difficulty of Evaluating Baselines: A Study on Recommender Systems
TLDR
It is shown that running baselines properly is difficult, and that empirical findings in research papers are questionable unless they were obtained on standardized benchmarks where baselines have been tuned extensively by the research community.
Collaborative Denoising Auto-Encoders for Top-N Recommender Systems
TLDR
It is demonstrated that the proposed model is a generalization of several well-known collaborative filtering models but with more flexible components, and that CDAE consistently outperforms state-of-the-art top-N recommendation methods on a variety of common evaluation metrics.
Towards a Better Understanding of Linear Models for Recommendation
TLDR
Through the derivation and analysis of the closed-form solutions for two basic regression and matrix factorization approaches, it is found that these two approaches are indeed inherently related but also diverge in how they "scale down" the singular values of the original user-item interaction matrix.
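
As a toy check of this "scale-down" view (a simplified reading, not the paper's exact closed forms): ridge-style item-item regression shrinks each singular direction smoothly by sigma^2 / (sigma^2 + lam), while a rank-k SVD applies a hard 0/1 cutoff. The data and lam below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
X = (rng.random((50, 20)) < 0.2).astype(float)    # toy user-item interactions
lam = 5.0

U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Ridge item-item regression: W = (X^T X + lam I)^{-1} X^T X.
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ X)
W = 0.5 * (W + W.T)                               # symmetrize away round-off

# Its spectrum is exactly sigma^2 / (sigma^2 + lam): smooth shrinkage.
print(np.allclose(np.sort(np.linalg.eigvalsh(W)),
                  np.sort(s**2 / (s**2 + lam))))  # -> True

# A rank-k SVD instead keeps the top-k singular values and zeroes the rest,
# i.e. a hard 0/1 cutoff rather than the smooth shrinkage above.
k = 5
print(np.r_[np.ones(k), np.zeros(len(s) - k)])    # SVD's scaling pattern
print(np.sort(s**2 / (s**2 + lam))[::-1])         # ridge's scaling pattern
```
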
Large-Scale Parallel Collaborative Filtering for the Netflix Prize
TLDR
This paper describes a CF algorithm, alternating least squares with weighted-λ-regularization (ALS-WR), which is implemented on a parallel Matlab platform, and shows empirically that the performance of ALS-WR monotonically improves with both the number of features and the number of ALS iterations.
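
A minimal sketch of the ALS-WR user update, in which each user's ridge penalty is scaled by that user's rating count n_u; the dense inputs and the function name are illustrative.

```python
import numpy as np

def als_wr_user_update(R, M, Y, lam=0.05):
    """ALS-WR user step: each user's ridge penalty is scaled by n_u, the
    number of ratings that user contributed.

    R: ratings matrix, M: boolean mask of observed entries, Y: item factors.
    """
    d = Y.shape[1]
    X = np.zeros((R.shape[0], d))
    for u in range(R.shape[0]):
        obs = np.nonzero(M[u])[0]
        if obs.size == 0:
            continue                                  # cold user: leave at zero
        Yo = Y[obs]
        A = Yo.T @ Yo + lam * obs.size * np.eye(d)    # n_u-weighted regularizer
        X[u] = np.linalg.solve(A, Yo.T @ R[u, obs])
    return X
```
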
Advances in Collaborative Filtering
The collaborative filtering (CF) approach to recommenders has recently enjoyed much interest and progress. The fact that it played a central role within the recently completed Netflix competition has…
Neural Collaborative Filtering vs. Matrix Factorization Revisited
TLDR
It is shown that, with proper hyperparameter selection, a simple dot product substantially outperforms the proposed learned similarities; MLPs should be used with care as embedding combiners, and dot products may be a better default choice.
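
The two similarity functions under comparison can be sketched in a few lines (untrained, illustrative weights; the real models learn these end to end):

```python
import numpy as np

def dot_score(p_u, q_i):
    """MF/GMF-style similarity: a plain dot product of the two embeddings."""
    return p_u @ q_i

def mlp_score(p_u, q_i, W1, b1, w2, b2):
    """NCF-style learned similarity: an MLP over the concatenated embeddings."""
    h = np.maximum(0.0, W1 @ np.concatenate([p_u, q_i]) + b1)   # ReLU layer
    return w2 @ h + b2
```
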
SLIM: Sparse Linear Methods for Top-N Recommender Systems
  • Xia Ning, G. Karypis
  • Computer Science, Mathematics
    2011 IEEE 11th International Conference on Data Mining
  • 2011
TLDR
A novel Sparse Linear Method (SLIM) is proposed, which generates top-N recommendations by aggregating from user purchase/rating profiles, and a sparse aggregation coefficient matrix W is learned from SLIM by solving an ℓ1-norm and ℓ2-norm regularized optimization problem.
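
A column-wise sketch of the SLIM fit, using scikit-learn's ElasticNet as the ℓ1/ℓ2 solver; the mapping of the regularization strengths onto sklearn's knobs is approximate, the hyperparameter values are illustrative, and a production implementation would exploit sparsity throughout.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def fit_slim(A, l1_reg=1e-3, l2_reg=1e-4):
    """Learn a sparse, non-negative item-item matrix W with A ~= A W and
    diag(W) = 0, via one ElasticNet problem per item column.
    """
    n_items = A.shape[1]
    alpha = l1_reg + l2_reg                 # rough mapping onto sklearn's knobs
    model = ElasticNet(alpha=alpha, l1_ratio=l1_reg / alpha,
                       positive=True, fit_intercept=False, max_iter=200)
    W = np.zeros((n_items, n_items))
    for j in range(n_items):
        y = A[:, j].copy()
        col = A[:, j].copy()
        A[:, j] = 0.0                       # mask column j so that w_jj = 0
        model.fit(A, y)
        W[:, j] = model.coef_
        A[:, j] = col                       # restore the masked column
    return W
```

Solving each column independently is what makes SLIM embarrassingly parallel across items.
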
Factorization meets the neighborhood: a multifaceted collaborative filtering model
TLDR
The factor and neighborhood models can now be smoothly merged, thereby building a more accurate combined model; a new evaluation metric is also suggested, which highlights the differences among methods based on their performance at a top-K recommendation task.
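
The factor side of that merged model can be sketched as an SVD++-style predictor, where the user factor is augmented with implicit-feedback item factors (a simplified sketch; the full model in the paper also integrates a neighborhood term, and all names here are illustrative):

```python
import numpy as np

def svdpp_predict(u, i, mu, b_user, b_item, P, Q, Y_imp, N):
    """SVD++-style prediction: biases plus a dot product in which the user
    factor is augmented by normalized implicit-feedback item factors.

    N[u] holds the indices of items user u interacted with.
    """
    implicit = Y_imp[N[u]].sum(axis=0) / np.sqrt(max(len(N[u]), 1))
    return mu + b_user[u] + b_item[i] + Q[i] @ (P[u] + implicit)
```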