Common Pitfalls in Training and Evaluating Recommender Systems

  • Hung-Hsuan Chen, Chu-An Chung, Hsin-Chien Huang, Wen Tsui
  • Published 2017
  • Computer Science
  • SIGKDD Explorations
  • This paper formally presents four common pitfalls in training and evaluating recommendation algorithms for information systems. Specifically, we show that it can be problematic to separate the server logs into training and test data for model generation and model evaluation if the training and the test data are selected improperly. In addition, we show that click-through rate, a common metric to measure and compare the performance of different recommendation algorithms, may not be a good…
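The first pitfall the abstract names is improper separation of server logs into training and test data. A minimal sketch of the usual safeguard, a chronological split so that no future interaction leaks into training, is shown below; the record layout and field names ("user", "item", "timestamp") are illustrative assumptions, not taken from the paper.

```python
# Sketch: split interaction logs by time rather than at random, so the
# test set contains only interactions that occur after all training data.
def temporal_split(logs, train_ratio=0.8):
    """Sort logs chronologically and cut at the given ratio."""
    ordered = sorted(logs, key=lambda r: r["timestamp"])
    cut = int(len(ordered) * train_ratio)
    return ordered[:cut], ordered[cut:]

logs = [
    {"user": "u1", "item": "i3", "timestamp": 100},
    {"user": "u2", "item": "i1", "timestamp": 50},
    {"user": "u1", "item": "i2", "timestamp": 200},
    {"user": "u3", "item": "i3", "timestamp": 150},
    {"user": "u2", "item": "i2", "timestamp": 300},
]
train, test = temporal_split(logs, train_ratio=0.8)

# Every training interaction precedes every test interaction.
assert max(r["timestamp"] for r in train) <= min(r["timestamp"] for r in test)
```

A random split of the same logs could place a user's later interactions in training and earlier ones in test, letting the model "see the future" and inflating offline metrics.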
    5 Citations

    On Offline Evaluation of Recommender Systems
    Differentiating Regularization Weights -- A Simple Mechanism to Alleviate Cold Start in Recommender Systems
    Behavior2Vec: Generating Distributed Representations of Users' Behaviors on Products for Recommender Systems
    Online Indices for Predictive Top-k Entity and Aggregate Queries on Knowledge Graphs
    • Yan Li, T. Ge, Cindy Chen
    • 2020 IEEE 36th International Conference on Data Engineering (ICDE)
    Mining the BoardGameGeek