Corpus ID: 239049664

A scale invariant ranking function for learning-to-rank: a real-world use case

Alessio Petrozziello, Xiaoke Liu, Christian Sommeregger
Nowadays, Online Travel Agencies provide the main service for booking holidays, business trips, and accommodations. As in many e-commerce services involving users, items, and preferences, a Recommender System facilitates navigation of the marketplace. One of the main challenges when productizing machine learning models (in this case, Learning-to-Rank models) is the need for not only consistent pre-processing transformations, but also input features maintaining a…



Real-time Personalization using Embeddings for Search Ranking at Airbnb
The embedding models were specifically tailored for the Airbnb marketplace and are able to capture guests' short-term and long-term interests, delivering effective home-listing recommendations.
SoftRank: optimizing non-smooth rank metrics
This work presents a new family of training objectives, called SoftRank, derived from the rank distributions of documents induced by smoothed scores, and focuses on a smoothed approximation to Normalized Discounted Cumulative Gain (NDCG), called SoftNDCG.
Exploiting user feedback to learn to rank answers in Q&A forums: a case study with Stack Overflow
The authors' L2R method was trained to learn answer ratings from the feedback users give to answers in Q&A forums, and outperformed a state-of-the-art baseline with gains of up to 21% in NDCG, a metric used to evaluate rankings.
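NDCG recurs throughout these summaries as the evaluation metric of choice. As a point of reference, here is a minimal sketch of how it can be computed from a list of graded relevance labels, using the common exponential-gain variant (an illustration, not code from any of the papers above):

```python
import math

def dcg(relevances):
    """Discounted Cumulative Gain: exponential gains discounted by
    log2 of the (1-indexed) rank position."""
    return sum((2 ** rel - 1) / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(relevances):
    """Normalized DCG: DCG of the given ordering divided by the DCG
    of the ideal (descending-relevance) ordering."""
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0
```

A perfectly ordered list scores 1.0; any misordering of items with distinct relevance lowers the score.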
Learning Groupwise Multivariate Scoring Functions Using Deep Neural Networks
This work proposes a new framework for multivariate scoring functions, in which the relevance score of a document is determined jointly by multiple documents in the list, and refers to this framework as GSFs (groupwise scoring functions).
Learning to rank for information retrieval
Three major approaches to learning to rank are introduced (pointwise, pairwise, and listwise); the relationship between the loss functions used in these approaches and widely used IR evaluation measures is analyzed, and the performance of the approaches is evaluated on the LETOR benchmark datasets.
AdaRank: a boosting algorithm for information retrieval
The proposed learning algorithm, referred to as AdaRank, repeatedly constructs 'weak rankers' on the basis of reweighted training data and finally combines the weak rankers linearly to make ranking predictions; it is proven that the training process of AdaRank directly optimizes the performance measure used.
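The reweighting step is the core of AdaRank's boosting loop. A simplified sketch of one round, assuming each query's performance measure (e.g. NDCG) has been mapped to [-1, 1] (function and variable names here are illustrative, not from the paper):

```python
import math

def adarank_update(weights, performances):
    """One simplified AdaRank boosting round.

    weights:      current distribution over training queries (sums to 1)
    performances: the weak ranker's per-query performance, in [-1, 1]

    Returns the ranker's combination weight alpha and a renormalized
    query distribution that up-weights poorly served queries.
    """
    num = sum(w * (1 + e) for w, e in zip(weights, performances))
    den = sum(w * (1 - e) for w, e in zip(weights, performances))
    alpha = 0.5 * math.log(num / den)
    new_w = [w * math.exp(-alpha * e) for w, e in zip(weights, performances)]
    total = sum(new_w)
    return alpha, [w / total for w in new_w]
```

Queries where the weak ranker does badly receive exponentially more weight, so the next weak ranker concentrates on them, mirroring AdaBoost's sample reweighting at the query level.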
Learning to rank: from pairwise approach to listwise approach
It is proposed that learning to rank should adopt the listwise approach, in which lists of objects are used as 'instances' in learning; two probability models, referred to as permutation probability and top-k probability, are introduced to define a listwise loss function for learning.
Web-Search Ranking with Initialized Gradient Boosted Regression Trees
This paper investigates Random Forests as a low-cost alternative algorithm to Gradient Boosted Regression Trees (GBRT) (the de facto standard of web-search ranking) and provides an upper bound of the Expected Reciprocal Rank (Chapelle et al., 2009) in terms of classification error.
On Application of Learning to Rank for E-Commerce Search
The practical challenges in applying learning to rank methods to E-Com search are discussed, including the challenges in feature representation, obtaining reliable relevance judgments, and optimally exploiting multiple user feedback signals such as click rates, add-to-cart ratios, order rates, and revenue.
Position-Aware ListMLE: A Sequential Learning Process for Ranking
A new listwise ranking method, called position-aware ListMLE (p-ListMLE for short), is proposed; it views the ranking problem as a sequential learning process, with each step learning a subset of parameters that maximize the corresponding stepwise probability distribution.
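The sequential view builds on the plain ListMLE loss: the negative log-likelihood of the ground-truth permutation under a Plackett-Luce model, where each step picks the next correct item from the remaining ones. A minimal sketch of that base loss (the position weighting that makes it "position-aware" is omitted here):

```python
import math

def listmle_loss(scores_in_true_order):
    """Negative log-likelihood of the ground-truth permutation under the
    Plackett-Luce model: at step i, the probability of selecting item i
    is a softmax of its score against all not-yet-placed items."""
    loss = 0.0
    for i in range(len(scores_in_true_order)):
        remaining = scores_in_true_order[i:]
        log_norm = math.log(sum(math.exp(s) for s in remaining))
        loss += log_norm - scores_in_true_order[i]
    return loss
```

Scores that decrease along the true order yield a lower loss than scores that increase, since each stepwise softmax then favors the correct next item; p-ListMLE additionally weights the early (top) steps more heavily.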