SelfCF: A Simple Framework for Self-supervised Collaborative Filtering
@article{Zhou2021SelfCFAS,
  title   = {SelfCF: A Simple Framework for Self-supervised Collaborative Filtering},
  author  = {Xin Zhou and Aixin Sun and Yong Liu and Jie Zhang and Chunyan Miao},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2107.03019}
}
Collaborative filtering (CF) is widely used to learn informative latent representations of users and items from observed interactions. Existing CF-based methods commonly adopt negative sampling to discriminate between items: observed user-item pairs are treated as positive instances, while unobserved pairs are treated as negative instances and sampled from a predefined distribution for training. Training with negative sampling on large datasets is computationally expensive. Further…
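The negative-sampling scheme the abstract describes can be sketched as follows. This is a minimal illustration with hypothetical toy data, not the paper's implementation: observed pairs are positives, and for each one an unobserved item is drawn uniformly as its negative (as in BPR-style training).

```python
import random

# Hypothetical toy data: the set of observed (user, item) interactions.
random.seed(0)
NUM_ITEMS = 10
observed = {(0, 1), (0, 3), (1, 2), (2, 5), (3, 7)}

def sample_negative(user, num_items, observed):
    """Uniformly sample an item the user has NOT interacted with."""
    while True:
        item = random.randrange(num_items)
        if (user, item) not in observed:
            return item

# For each positive pair, draw one unobserved item as its negative.
triples = [(u, i, sample_negative(u, NUM_ITEMS, observed))
           for (u, i) in sorted(observed)]
```

The rejection loop is why this becomes expensive at scale: every training epoch resamples negatives for every positive pair, and richer sampling distributions cost even more per draw.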
12 Citations
Self-Supervised Learning for Recommender Systems: A Survey
- Computer Science, ArXiv, 2022
An exclusive definition of SSR is proposed, on top of which a comprehensive taxonomy is built to divide existing SSR methods into four categories: contrastive, generative, predictive, and hybrid.
Revisiting Negative Sampling VS. Non-Sampling in Implicit Recommendation
- Computer Science, ACM Transactions on Information Systems, 2022
The results empirically show that although negative sampling has been widely applied to recent recommendation models, it is non-trivial for uniform sampling methods to show comparable performance to non-sampling learning methods.
XSimGCL: Towards Extremely Simple Graph Contrastive Learning for Recommendation
- Computer Science, ArXiv, 2022
It is revealed that CL enhances recommendation through endowing the model with the ability to learn more evenly distributed user/item representations, which can implicitly alleviate the pervasive popularity bias and promote long-tail items.
Bootstrap Latent Representations for Multi-modal Recommendation
- Computer Science, ArXiv, 2022
A novel self-supervised multi-modal recommendation model, dubbed BM3, is proposed; it requires neither augmentations from auxiliary graphs nor negative samples, removing both the need to contrast with negative examples and the complex graph augmentations generated by an additional target network for contrastive view creation.
Are Graph Augmentations Necessary?: Simple Graph Contrastive Learning for Recommendation
- Computer Science, SIGIR, 2022
A simple CL method is proposed which discards the graph augmentations and instead adds uniform noises to the embedding space for creating contrastive views that can smoothly adjust the uniformity of learned representations and has distinct advantages over its graph augmentation-based counterparts in terms of recommendation accuracy and training efficiency.
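The noise-based augmentation summarized above can be illustrated with a short sketch. This is a rough approximation under assumed values (embedding dimension and noise scale `EPS` are hypothetical): a contrastive view is created by adding a small, unit-norm random perturbation, sign-aligned with the embedding, rather than by perturbing the graph.

```python
import math
import random

random.seed(42)
DIM, EPS = 8, 0.1  # hypothetical embedding size and noise magnitude

def l2_normalize(v):
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def noisy_view(embedding, eps=EPS):
    """Create a contrastive view by adding a scaled unit-norm random
    direction in embedding space (no graph augmentation involved)."""
    noise = l2_normalize([random.uniform(-1, 1) for _ in range(DIM)])
    # Keep each noise component sign-aligned with the embedding, so the
    # perturbed point stays in the same orthant as the original.
    return [x + eps * math.copysign(n, x) for x, n in zip(embedding, noise)]

emb = l2_normalize([random.gauss(0, 1) for _ in range(DIM)])
view1, view2 = noisy_view(emb), noisy_view(emb)
```

Because the noise direction is unit-norm, each view sits at distance exactly `EPS` from the original embedding, so the perturbation strength is directly controllable.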
A Tale of Two Graphs: Freezing and Denoising Graph Structures for Multimodal Recommendation
- Computer Science, 2022
This work argues the latent graph structure learning of LATTICE is both inefficient and unnecessary, and proposes a simple yet effective model, dubbed as FREEDOM, that FREEzes the item-item graph and Denoises the user-item interaction graph simultaneously for multimodal recommendation.
CrossCBR: Cross-view Contrastive Learning for Bundle Recommendation
- Computer Science, KDD, 2022
This work proposes to model the cooperative association between the two different views through cross-view contrastive learning by encouraging the alignment of the two separately learned views, so that each view can distill complementary information from the other view, achieving mutual enhancement.
Graph Augmentation-Free Contrastive Learning for Recommendation
- Computer Science, ArXiv, 2021
A graph augmentation-free CL method to simply adjust the uniformity of the learned user/item representation distributions on the unit hypersphere by adding uniform noises to the original representations for data augmentations, and enhance recommendation from a geometric view is proposed.
CL4CTR: A Contrastive Learning Framework for CTR Prediction
- Computer Science, ArXiv, 2022
This paper proposes a model-agnostic Contrastive Learning for CTR (CL4CTR) framework consisting of three self-supervised learning signals to regularize the feature representation learning: contrastive loss, feature alignment, and field uniformity.
Layer-refined Graph Convolutional Networks for Recommendation
- Computer Science, ArXiv, 2022
A layer-refined GCN model, dubbed LayerGCN, is proposed that prunes the edges of the user-item interaction graph following a degree-sensitive probability instead of a uniform distribution, and outperforms the state-of-the-art models on four public datasets with fast training convergence.
References
Showing 1-10 of 57 references
Bootstrapping User and Item Representations for One-Class Collaborative Filtering
- Computer Science, SIGIR, 2021
This paper proposes a novel OCCF framework, named BUIR, which does not require negative sampling; it adopts two distinct encoder networks that learn from each other: the first encoder is trained to predict the output of the second encoder as its target, while the second encoder provides consistent targets by slowly approximating the first encoder.
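The "slowly approximating" update described above is a momentum (exponential moving average) update between the two encoders. The sketch below is a toy illustration under stated assumptions: each encoder is reduced to a plain weight vector, and the momentum value and step size are hypothetical.

```python
import random

random.seed(1)
DIM, MOMENTUM = 4, 0.995  # hypothetical dimensionality and momentum

# Toy setup: each "encoder" is just a weight vector standing in for a
# full network, to illustrate the update rule only.
online = [random.gauss(0, 1) for _ in range(DIM)]  # trained by gradients
target = list(online)                              # updated only by EMA

def momentum_update(online, target, m=MOMENTUM):
    """Slowly move the target encoder toward the online encoder:
    target <- m * target + (1 - m) * online."""
    return [m * t + (1 - m) * o for t, o in zip(target, online)]

# Pretend a gradient step shifted every online weight by 0.1, then
# refresh the target with the exponential moving average.
online = [w + 0.1 for w in online]
target = momentum_update(online, target)
```

With a momentum close to 1, the target encoder lags far behind the online encoder, which is what provides the stable, slowly drifting prediction targets.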
Optimizing top-n collaborative filtering via dynamic negative item sampling
- Computer Science, SIGIR, 2013
This paper proposes to dynamically choose negative training samples from the ranked list produced by the current prediction model and iteratively update the model, showing that this approach not only reduces the training time, but also leads to significant performance gains.
Collaborative Deep Learning for Recommender Systems
- Computer Science, KDD, 2015
A hierarchical Bayesian model called collaborative deep learning (CDL), which jointly performs deep representation learning for the content information and collaborative filtering for the ratings (feedback) matrix is proposed, which can significantly advance the state of the art.
Self-Supervised Learning for Recommender Systems: A Survey
- Computer Science, ArXiv, 2022
An exclusive definition of SSR is proposed, on top of which a comprehensive taxonomy is built to divide existing SSR methods into four categories: contrastive, generative, predictive, and hybrid.
Revisiting Negative Sampling VS. Non-Sampling in Implicit Recommendation
- Computer Science, ACM Transactions on Information Systems, 2022
The results empirically show that although negative sampling has been widely applied to recent recommendation models, it is non-trivial for uniform sampling methods to show comparable performance to non-sampling learning methods.
Self-supervised Graph Learning for Recommendation
- Computer Science, SIGIR, 2021
This work explores self-supervised learning on the user-item graph to improve the accuracy and robustness of GCNs for recommendation, implementing it on the state-of-the-art model LightGCN, which gains the ability to automatically mine hard negatives.
Neural Graph Collaborative Filtering
- Computer Science, SIGIR, 2019
This work develops a new recommendation framework Neural Graph Collaborative Filtering (NGCF), which exploits the user-item graph structure by propagating embeddings on it, effectively injecting the collaborative signal into the embedding process in an explicit manner.
Efficient Non-Sampling Factorization Machines for Optimal Context-Aware Recommendation
- Computer Science, WWW, 2020
This paper designs a new ideal framework named Efficient Non-Sampling Factorization Machines (ENSFM), which not only seamlessly connects the relationship between FM and Matrix Factorization (MF), but also resolves the challenging efficiency issue via novel memorization strategies.
Factorization meets the neighborhood: a multifaceted collaborative filtering model
- Computer Science, KDD, 2008
The factor and neighborhood models can now be smoothly merged, thereby building a more accurate combined model and a new evaluation metric is suggested, which highlights the differences among methods, based on their performance at a top-K recommendation task.
NAIS: Neural Attentive Item Similarity Model for Recommendation
- Computer Science, IEEE Transactions on Knowledge and Data Engineering, 2018
This work proposes a neural network model named Neural Attentive Item Similarity model (NAIS), which is the first attempt that designs neural network models for item-based CF, opening up new research possibilities for future developments of neural recommender systems.