Corpus ID: 239049523

Learning to Recommend Using Non-Uniform Data

@article{Chen2021LearningTR,
  title={Learning to Recommend Using Non-Uniform Data},
  author={Wan-Ping Chen and Mohsen Bayati},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.11248}
}
Learning user preferences for products from their past purchases or reviews is a cornerstone of modern recommendation engines. One complication in this learning task is that some users are more likely to purchase or review products, and some products are more likely to be purchased or reviewed. This non-uniform pattern degrades the power of many existing recommendation algorithms, which assume that the observed data are sampled uniformly at random among user-product… 
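The non-uniform observation pattern the abstract describes can be simulated in a few lines. The activity levels and the sampling rule below are illustrative assumptions for this sketch, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items = 100, 80

# Hypothetical heavy-tailed activity levels: a few very active users and
# popular items account for most observations.
user_activity = rng.pareto(2.0, n_users) + 0.05
item_activity = rng.pareto(2.0, n_items) + 0.05

# Entry (i, j) is observed with probability proportional to the product of
# the two activity levels (scaled into [0, 0.5]), i.e. non-uniformly.
p = np.outer(user_activity, item_activity)
p = np.minimum(p / p.max() * 0.5, 1.0)
observed = rng.random((n_users, n_items)) < p

# Under uniform sampling every row would have roughly the same number of
# observed entries; here the per-user counts are highly skewed.
counts = observed.sum(axis=1)
print(counts.min(), counts.max())
```

A method that assumes uniform sampling would treat every observed entry as equally representative, which the skewed `counts` show is not the case here.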

References

SHOWING 1-10 OF 28 REFERENCES
A survey of matrix completion methods for recommendation systems
This article presents a comprehensive survey of the matrix completion methods used in recommendation systems, focusing on the mathematical models for matrix completion and the corresponding computational algorithms, as well as their characteristics and potential issues.
Probabilistic Matrix Factorization
The Probabilistic Matrix Factorization (PMF) model is presented, which scales linearly with the number of observations and performs well on the large, sparse, and very imbalanced Netflix dataset; the model is also extended to include an adaptive prior on its parameters.
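As a rough sketch (not the paper's implementation), the MAP view of PMF reduces to an L2-regularized squared loss over the observed entries only, which is why the cost scales linearly with the number of observations; plain gradient descent suffices for a toy problem. All dimensions, rates, and the 30% observation fraction below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)
n_u, n_v, k = 30, 20, 5

# Ground-truth low-rank ratings and a sparse observation mask.
U_true = rng.normal(size=(n_u, k))
V_true = rng.normal(size=(n_v, k))
R = U_true @ V_true.T
mask = rng.random(R.shape) < 0.3

# MAP estimation of user/item factors: squared error on observed entries
# plus L2 penalties from the Gaussian priors.
U = 0.1 * rng.normal(size=(n_u, k))
V = 0.1 * rng.normal(size=(n_v, k))
lam, lr = 0.1, 0.01
for _ in range(500):
    E = mask * (U @ V.T - R)        # residual, zeroed on unobserved entries
    U -= lr * (E @ V + lam * U)     # gradient of the penalized loss w.r.t. U
    V -= lr * (E.T @ U + lam * V)   # gradient w.r.t. V

rmse = np.sqrt((E ** 2).sum() / mask.sum())
```

Each update touches only the observed residuals in `E`, so the per-iteration work grows with the number of observations rather than with the full matrix size.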
Missing Not at Random in Matrix Completion: The Effectiveness of Estimating Missingness Probabilities Under a Low Nuclear Norm Assumption
This paper tackles MNAR matrix completion by first solving a different matrix completion problem that recovers the missingness probabilities, and establishes finite-sample error bounds on how accurate these probability estimates are and how well they debias standard matrix completion losses for the original matrix to be completed.
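The debiasing step can be illustrated with a simple inverse-propensity-weighted loss: each observed entry's error is divided by its estimated observation probability, so the weighted average is approximately unbiased for the full-matrix loss. The function name and toy data below are hypothetical; in the paper the propensities themselves are first recovered by matrix completion, whereas here we hand them to the loss directly:

```python
import numpy as np

def ips_weighted_loss(X_hat, X_obs, mask, p_hat, eps=1e-6):
    """Inverse-propensity-weighted squared loss over observed entries,
    averaged over the full matrix size."""
    w = mask / np.maximum(p_hat, eps)   # weight 0 on unobserved entries
    return np.sum(w * (X_hat - X_obs) ** 2) / X_obs.size

# Toy check: with the true propensities, the weighted loss on a biased
# subset of entries tracks the loss over the full matrix.
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 40))                      # target matrix
p = np.clip(rng.random((50, 40)), 0.2, 1.0)        # observation probabilities
mask = (rng.random((50, 40)) < p).astype(float)    # MNAR-style observations
X_hat = X + 0.1 * rng.normal(size=X.shape)         # imperfect estimate

full = np.mean((X_hat - X) ** 2)
debiased = ips_weighted_loss(X_hat, X * mask, mask, p)
```

An unweighted loss over the observed entries would over-represent the high-propensity rows and columns; the `1/p_hat` weights undo exactly that over-representation.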
Amazon.com Recommendations: Item-to-Item Collaborative Filtering
This work compares three common approaches to solving the recommendation problem (traditional collaborative filtering, cluster models, and search-based methods) with Amazon's own algorithm, called item-to-item collaborative filtering.
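A minimal illustration of the item-to-item idea, assuming cosine similarity over binary purchase vectors; the similarity measure and the tiny purchase matrix are our choices for the sketch, not necessarily Amazon's:

```python
import numpy as np

# Rows are users, columns are items; 1 means the user bought the item.
R = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)

# Item-item cosine similarity over the users who bought each item.
norms = np.linalg.norm(R, axis=0)
sim = (R.T @ R) / np.outer(norms, norms)
np.fill_diagonal(sim, 0.0)          # ignore self-similarity

# Recommend the item most similar to one the user already bought.
most_similar_to_0 = int(np.argmax(sim[0]))
```

Because the similarity table is computed offline per item, serving a recommendation is just a lookup, which is what makes the item-to-item approach scale to large catalogs.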
A Missing Data Paradox for Nearest Neighbor Recommender Systems
Given that some data must go missing, NMAR missingness can often pick the "right" values to preserve, i.e., it preserves the more informative data.
Recommender Systems: The Textbook
This book comprehensively covers recommender systems, which provide personalized recommendations of products or services based on users' previous searches or purchases, synthesizing both fundamental and advanced topics of a research area that has now reached maturity.
On Low-rank Trace Regression under General Sampling Distribution
A unifying technique is presented for analyzing these problems via both estimators, leading to short proofs of the existing results as well as new ones, and similar error bounds are proved when the regularization parameter is chosen via K-fold cross-validation.
Statistical Learning with Sparsity: The Lasso and Generalizations
This book presents methods that exploit sparsity to help recover the underlying signal in a set of data and extract useful and reproducible patterns from big datasets.
Rank penalized estimators for high-dimensional matrices
In this paper we consider the trace regression model. Assume that we observe a small set of entries or linear combinations of entries of an unknown matrix $A_0$ corrupted by noise. We propose a new
Restricted strong convexity and weighted matrix completion: Optimal bounds with noise
The matrix completion problem is considered under a form of row/column-weighted entrywise sampling, which includes uniform entrywise sampling as a special case, and it is proved that with high probability the sampling operator satisfies a form of restricted strong convexity with respect to a weighted Frobenius norm.