Corpus ID: 16072132

Combination of Diverse Ranking Models for Personalized Expedia Hotel Searches

Xudong Liu, Bin Xu, Yuyu Zhang, Qiang Yan, Liang Pang, Qiang Li, Hanxiao Sun, Bin Wang
The ICDM Challenge 2013 applies machine learning to the problem of hotel ranking: the goal is to maximize purchases given hotel characteristics, the location attractiveness of hotels, users' aggregated purchase history, and competitive online travel agency information for each potential hotel choice. This paper describes the solution of team "binghsu & MLRush & BrickMover". We conduct simple feature engineering, and each team member individually trains different models. Afterwards…
4 Citations


Learn to Rank ICDM 2013 Challenge Ranking Hotel Search Queries

This work proposes to use the framework of the logistic regression binary classification algorithm to build a straightforward linear model for ranking, and considers the hotel search and click-through data provided by the popular travel website Expedia.com.
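That idea can be sketched as follows (an illustrative pointwise ranker, not the paper's code): fit a logistic-regression model on binary purchase/click labels, then rank items by the linear score.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_pointwise_lr(X, y, lr=0.1, epochs=200):
    """Fit weights w by gradient descent on the logistic (cross-entropy) loss."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

def rank(X, w):
    """Order items by the linear score X @ w, best first."""
    return np.argsort(-(X @ w))
```

The ranking step only needs the linear score, so the sigmoid matters for training, not for ordering.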

Hotel Recommendation System

The aim of this hotel recommendation task is to predict and recommend the five hotel clusters, out of a hundred distinct clusters, that a user is most likely to book.

Identifying Consumer-Welfare Changes when Online Search Platforms Change Their List of Search Results

A search-platform experiment is used to determine how to measure the effects of search responses on consumer welfare. Under a random listing, the welfare of the online travel agency's users is lowered by an average of $8.84 per user relative to Expedia's own ranking system.

Factorization Machines

An experiment is carried out to show that Factorization Machines outperform several other machine learning models, and that using the Alternating Least-Squares learning approach and increasing the dimensionality of the latent parameter vector gives the best performance.
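The model behind these experiments can be sketched as the second-order FM prediction equation (a minimal illustration, not the authors' implementation), computed with the usual O(k·n) reformulation instead of an explicit pairwise loop:

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order Factorization Machine prediction:
    y = w0 + w.x + sum_{i<j} <V[i], V[j]> x_i x_j,
    using the identity
    sum_{i<j} <v_i, v_j> x_i x_j = 0.5 * sum_f [(sum_i v_if x_i)^2 - sum_i v_if^2 x_i^2]."""
    linear = w0 + w @ x
    s = V.T @ x                   # (k,) per-factor weighted sums
    s2 = (V.T ** 2) @ (x ** 2)    # (k,) per-factor sums of squares
    pairwise = 0.5 * np.sum(s ** 2 - s2)
    return linear + pairwise
```

Increasing the number of columns of `V` corresponds to the "number of dimensions of the latent parameter vector" the summary refers to.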



Adapting boosting for information retrieval measures

This work presents a new ranking algorithm that combines the strengths of two previous methods: boosted tree classification and LambdaRank. It shows how to find the optimal linear combination of any two rankers, and uses this method to solve the line-search problem exactly during boosting.

AdaRank: a boosting algorithm for information retrieval

The proposed learning algorithm, referred to as AdaRank, repeatedly constructs 'weak rankers' on the basis of reweighted training data and finally combines the weak rankers linearly to make ranking predictions. The authors prove that the training process of AdaRank directly enhances the performance measure used.
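The reweighting loop can be sketched as follows. This is a deliberately simplified toy: weak rankers are abstracted to a precomputed per-query performance matrix, and the ensemble's per-query performance is approximated linearly, whereas the real algorithm re-evaluates the combined ranker each round.

```python
import numpy as np

def adarank(perf, rounds=10):
    """Toy AdaRank loop. perf[j, q] is the performance (in [-1, 1]) of
    weak ranker j on query q. Returns accumulated weak-ranker weights."""
    n_rankers, n_queries = perf.shape
    P = np.full(n_queries, 1.0 / n_queries)   # query weights
    alpha = np.zeros(n_rankers)
    combined = np.zeros(n_queries)            # linear proxy for ensemble performance
    for _ in range(rounds):
        j = np.argmax(perf @ P)               # best ranker on the currently hard queries
        E = perf[j]
        a = 0.5 * np.log((P @ (1 + E)) / (P @ (1 - E)))
        alpha[j] += a
        combined += a * E
        P = np.exp(-combined)                 # reweight toward poorly served queries
        P /= P.sum()
    return alpha
```

The weight update `exp(-combined)` is what makes later rounds focus on queries the current ensemble still ranks badly.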

Horizontal and Vertical Ensemble with Deep Representation for Classification

Horizontal Voting, Vertical Voting, and Horizontal Stacked Ensemble methods are proposed to improve the classification performance of deep neural networks.
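Horizontal voting, for instance, can be sketched as averaging the class-probability outputs of the network checkpoints saved over the last training epochs (the checkpoint array here is an illustrative stand-in for real network snapshots):

```python
import numpy as np

def horizontal_voting(checkpoint_probs):
    """Average the (n_checkpoints, n_samples, n_classes) probability
    predictions from the last few training epochs, then pick the argmax."""
    avg = np.mean(checkpoint_probs, axis=0)   # (n_samples, n_classes)
    return np.argmax(avg, axis=1)
```

Averaging over epochs smooths out the epoch-to-epoch fluctuation of a single snapshot's predictions.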

From RankNet to LambdaRank to LambdaMART: An Overview

RankNet, LambdaRank, and LambdaMART have proven to be very successful algorithms for solving real-world ranking problems, but their details are spread across several papers and reports; this is a self-contained, detailed, and complete description of them.
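The central idea can be sketched as pairwise RankNet gradients weighted by the NDCG change from swapping each pair (a simplified illustration, not the report's full derivation):

```python
import numpy as np

def lambda_gradients(scores, relevance, sigma=1.0):
    """LambdaRank-style gradients: for every pair where item i is more
    relevant than item j, push s_i up and s_j down, weighted by the
    |delta NDCG| obtained from swapping the pair in the current ranking."""
    n = len(scores)
    order = np.argsort(-scores)
    ranks = np.empty(n, int)
    ranks[order] = np.arange(n)
    ideal = np.sort(relevance)[::-1]
    idcg = np.sum((2.0 ** ideal - 1) / np.log2(np.arange(n) + 2))
    gains = (2.0 ** relevance - 1) / idcg
    disc = 1.0 / np.log2(ranks + 2)
    lam = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if relevance[i] > relevance[j]:
                delta = abs((gains[i] - gains[j]) * (disc[i] - disc[j]))
                rho = 1.0 / (1.0 + np.exp(sigma * (scores[i] - scores[j])))
                lam[i] += sigma * rho * delta
                lam[j] -= sigma * rho * delta
    return lam
```

In LambdaMART these lambdas become the pseudo-residuals fitted by each boosted regression tree.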

Learning to rank for information retrieval

Three major approaches to learning to rank are introduced, i.e., the pointwise, pairwise, and listwise approaches; the relationship between the loss functions used in these approaches and widely used IR evaluation measures is analyzed; and the performance of these approaches on the LETOR benchmark datasets is evaluated.
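The difference between the first two approaches can be illustrated with toy losses (illustrative functions, not LETOR's actual measures): pointwise regresses each document's score toward its label, while pairwise penalizes mis-ordered document pairs.

```python
import numpy as np

def pointwise_loss(scores, rel):
    """Pointwise: squared error of each document's score against its label."""
    return np.mean((scores - rel) ** 2)

def pairwise_loss(scores, rel):
    """Pairwise: logistic loss on every pair where document i should
    outrank document j, i.e. log(1 + exp(-(s_i - s_j)))."""
    losses = [np.log1p(np.exp(-(scores[i] - scores[j])))
              for i in range(len(rel)) for j in range(len(rel))
              if rel[i] > rel[j]]
    return np.mean(losses)
```

A listwise loss would instead score the whole permutation at once, which is why it can track measures like NDCG more directly.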

Learning to rank with extremely randomized trees

The results show that ensembles of randomized trees are quite competitive for the "learning to rank" problem, and the computing times of the algorithms are analyzed.

Maxout Networks

A simple new model called maxout is defined, designed both to facilitate optimization by dropout and to improve the accuracy of dropout's fast approximate model-averaging technique.
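A single maxout unit (a minimal sketch) takes the maximum over k affine pieces; with two pieces it can, for example, realize the absolute-value activation:

```python
import numpy as np

def maxout(x, W, b):
    """Maxout unit: h(x) = max_k (W[k] @ x + b[k]) over k affine pieces.
    W has shape (k, d), b has shape (k,)."""
    return np.max(W @ x + b, axis=0)
```

Because the max of affine functions is convex and piecewise linear, a maxout unit can approximate arbitrary convex activations as k grows.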

Extremely randomized trees

A new tree-based ensemble method for supervised classification and regression problems that strongly randomizes both the attribute and the cut-point choice while splitting a tree node; in the extreme case, it builds totally randomized trees whose structures are independent of the output values of the learning sample.
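The node-splitting rule can be sketched as follows (a toy version with a variance-reduction score; the parameter names are illustrative): among K randomly chosen attributes, draw one uniformly random cut-point each and keep the best-scoring candidate.

```python
import random

def pick_split(X, y, K, rng):
    """Extra-Trees-style split: for K random attributes, draw a uniform
    cut-point between the attribute's min and max, and keep the candidate
    with the largest variance reduction in y."""
    def var(idx):
        if not idx:
            return 0.0
        m = sum(y[i] for i in idx) / len(idx)
        return sum((y[i] - m) ** 2 for i in idx) / len(idx)
    n, d = len(X), len(X[0])
    best = None
    for a in rng.sample(range(d), K):
        lo = min(row[a] for row in X)
        hi = max(row[a] for row in X)
        cut = rng.uniform(lo, hi)
        left = [i for i in range(n) if X[i][a] < cut]
        right = [i for i in range(n) if X[i][a] >= cut]
        score = var(range(n)) - (len(left) * var(left) + len(right) * var(right)) / n
        if best is None or score > best[0]:
            best = (score, a, cut)
    return best[1], best[2]   # chosen attribute index and cut-point
```

Setting K = 1 gives the "totally randomized" extreme, where the tree structure no longer depends on y at all.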

Factorization Machines with libFM

Factorization approaches provide high accuracy in several important prediction problems, for example, recommender systems. However, applying factorization approaches to a new prediction problem is a…

Greedy function approximation: A gradient boosting machine.

A general gradient descent boosting paradigm is developed for additive expansions based on any fitting criterion, and specific algorithms are presented for least-squares, least absolute deviation, and Huber-M loss functions for regression, and multiclass logistic likelihood for classification.
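The paradigm can be sketched for least-squares regression with depth-1 stumps on a single feature (an illustrative toy, not Friedman's full algorithm): each round fits a stump to the current residuals, which are the negative gradient of the squared loss, and adds it with shrinkage.

```python
import numpy as np

def boost_stumps(x, y, n_rounds=50, lr=0.1):
    """Least-squares gradient boosting on a 1-D feature with one-split
    stumps. Returns the base prediction f0 and the stump ensemble."""
    thresholds = np.unique(x)
    ensemble = []                         # (threshold, left_value, right_value)
    f0 = y.mean()
    pred = np.full_like(y, f0, dtype=float)
    for _ in range(n_rounds):
        resid = y - pred                  # negative gradient of 0.5*(y - f)^2
        best = None
        for t in thresholds:
            left, right = resid[x < t], resid[x >= t]
            lv = left.mean() if len(left) else 0.0
            rv = right.mean() if len(right) else 0.0
            sse = ((left - lv) ** 2).sum() + ((right - rv) ** 2).sum()
            if best is None or sse < best[0]:
                best = (sse, t, lv, rv)
        _, t, lv, rv = best
        ensemble.append((t, lv, rv))
        pred += lr * np.where(x < t, lv, rv)
    return f0, ensemble

def boost_predict(x, f0, ensemble, lr=0.1):
    pred = np.full_like(x, f0, dtype=float)
    for t, lv, rv in ensemble:
        pred += lr * np.where(x < t, lv, rv)
    return pred
```

Swapping the squared loss for least absolute deviation or the Huber loss only changes which pseudo-residuals the stumps are fitted to, which is the generality the abstract describes.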