• Corpus ID: 235624155

Leveraging semantically similar queries for ranking via combining representations

@article{Helm2021LeveragingSS,
  title={Leveraging semantically similar queries for ranking via combining representations},
  author={Hayden S. Helm and Marah Abdin and Benjamin D. Pedigo and Shweti Mahajan and Vince Lyzinski and Youngser Park and Amitabh Basu and Piali Choudhury and Christopher M. White and Weiwei Yang and Carey E. Priebe},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.12621}
}
In modern ranking problems, different and disparate representations of the items to be ranked are often available. It is sensible, then, to try to combine these representations to improve ranking. Indeed, learning to rank via combining representations is both principled and practical for learning a ranking function for a particular query. In extremely data-scarce settings, however, the amount of labeled data available for a particular query can lead to a highly variable and ineffective ranking… 
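The combining-representations idea is easiest to see in code. Below is a minimal sketch, not the authors' implementation: it assumes each representation induces a distance from the query to every item, combines the per-representation distances with convex weights, and picks the weights by grid search so that the supervisory (known-relevant) items land near the top of the ranked list. All names (`fit_weights`, `combine_and_rank`) and the two-representation simplification are hypothetical.

```python
import numpy as np

def rank_items(dists):
    """Rank of each item under a distance vector (rank 0 = closest)."""
    return np.argsort(np.argsort(dists))

def combine_and_rank(dist_list, weights):
    """Convex combination of per-representation distances, then rank."""
    combined = sum(w * d for w, d in zip(weights, dist_list))
    return rank_items(combined)

def fit_weights(dist_list, supervisory_idx, grid=101):
    """Grid-search convex weights (two representations, for brevity)
    that minimize the mean rank of the supervisory items."""
    best_w, best_score = (0.5, 0.5), np.inf
    for w0 in np.linspace(0.0, 1.0, grid):
        w = (w0, 1.0 - w0)
        score = combine_and_rank(dist_list, w)[supervisory_idx].mean()
        if score < best_score:
            best_w, best_score = w, score
    return best_w

# Toy usage: 100 items, two representations, items 3 and 7 known relevant.
rng = np.random.default_rng(0)
dists = [rng.random(100), rng.random(100)]
w = fit_weights(dists, np.array([3, 7]))
```

With almost no labeled items per query, the selected weights in a scheme like this become highly variable, which is the failure mode the abstract describes; the titular idea is to stabilize the selection by also leveraging supervision from semantically similar queries.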


References

Showing 1-10 of 38 references

Learning to rank via combining representations

Learning to rank -- producing a ranked list of items specific to a query and with respect to a set of supervisory items -- is a problem of general interest. The setting we consider is one in which no…

Learning to rank for information retrieval

This survey introduces the three major approaches to learning to rank, i.e., the pointwise, pairwise, and listwise approaches; analyzes the relationship between the loss functions used in these approaches and widely used IR evaluation measures; and evaluates the performance of these approaches on the LETOR benchmark datasets.
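To make the three approaches concrete, here is a toy sketch (an editor's illustration, not code from the survey) of one representative loss of each kind on a single query's scored list: squared error for pointwise, a RankSVM-style hinge over misordered pairs for pairwise, and a ListNet-style top-one cross-entropy for listwise.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def pointwise_loss(scores, rel):
    """Pointwise: regress each item's score onto its relevance label."""
    return np.mean((scores - rel) ** 2)

def pairwise_loss(scores, rel):
    """Pairwise: hinge penalty for each misordered pair (RankSVM-style)."""
    loss = pairs = 0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if rel[i] > rel[j]:
                loss += max(0.0, 1.0 - (scores[i] - scores[j]))
                pairs += 1
    return loss / max(pairs, 1)

def listwise_loss(scores, rel):
    """Listwise: cross-entropy between the top-one probabilities induced
    by the labels and by the scores (ListNet-style)."""
    return -np.sum(softmax(rel) * np.log(softmax(scores)))

# Toy usage: three items with graded relevance labels.
rel = np.array([2.0, 0.0, 1.0])
scores = np.array([1.2, 0.1, 0.4])
losses = (pointwise_loss(scores, rel),
          pairwise_loss(scores, rel),
          listwise_loss(scores, rel))
```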

LETOR: A benchmark collection for research on learning to rank for information retrieval

This paper describes the details of the LETOR collection, shows how it can be used in different kinds of research, and compares several state-of-the-art learning-to-rank algorithms on LETOR.

Inductive Representation Learning on Large Graphs

This paper presents GraphSAGE, a general inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data; it outperforms strong baselines on three inductive node-classification benchmarks.
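A minimal numpy sketch of the layer that summary describes, assuming the mean aggregator and using separate self/neighbor weight matrices in place of the paper's single weight over a concatenation (equivalent up to a reparameterization); the fixed-size neighborhood sampling the full method uses for scalability is omitted here.

```python
import numpy as np

def graphsage_mean_layer(H, adj_list, W_self, W_neigh):
    """One GraphSAGE-style layer with the mean aggregator:
    h_v <- ReLU(W_self @ h_v + W_neigh @ mean(h_u for u in N(v))),
    followed by L2 normalization. Neighborhood sampling omitted."""
    d_in = H.shape[1]
    out = []
    for v, neigh in enumerate(adj_list):
        h_n = H[neigh].mean(axis=0) if neigh else np.zeros(d_in)
        h = np.maximum(0.0, W_self @ H[v] + W_neigh @ h_n)
        out.append(h / (np.linalg.norm(h) + 1e-12))
    return np.array(out)

# Toy usage: 4 nodes on a path graph, 3-d features, 2-d output.
rng = np.random.default_rng(0)
H = rng.random((4, 3))
adj = [[1], [0, 2], [1, 3], [2]]
H1 = graphsage_mean_layer(H, adj, rng.random((2, 3)), rng.random((2, 3)))
```

Because the layer is a function of node features rather than a per-node lookup table, it can embed previously unseen nodes, which is what makes the framework inductive.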

Yahoo! Learning to Rank Challenge Overview

This paper provides an overview and analysis of the challenge, along with a detailed description of the released datasets, which were used internally at Yahoo! for learning the web search ranking function.

VERSE: Versatile Graph Embeddings from Similarity Measures

This paper proposes VERtex Similarity Embeddings (VERSE), a simple, versatile, and memory-efficient method that derives graph embeddings explicitly calibrated to preserve the distributions of a selected vertex-to-vertex similarity measure.
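The calibration that summary refers to can be written as minimizing, per node, the KL divergence between the chosen similarity distribution and a softmax over embedding dot products. The sketch below is hypothetical: it is full-batch, treats the other embedding rows as fixed when taking the gradient, and uses the exact softmax for clarity, whereas the published method uses noise-contrastive estimation to avoid it.

```python
import numpy as np

def verse_step(W, sim_row, v, lr=0.05):
    """One gradient step on KL( sim(v, .) || softmax(W @ W[v]) ) with
    respect to W[v], other embedding rows held fixed. `sim_row` is the
    normalized similarity distribution out of node v, e.g. personalized
    PageRank. Exact softmax shown for clarity; VERSE itself samples."""
    logits = W @ W[v]
    q = np.exp(logits - logits.max())
    q /= q.sum()
    W[v] -= lr * (W.T @ (q - sim_row))   # gradient is W^T (q - p)
    return W

# Toy usage: 5 nodes, 2-d embeddings, uniform similarity from node 0.
rng = np.random.default_rng(0)
W = rng.random((5, 2))
W = verse_step(W, sim_row=np.full(5, 0.2), v=0)
```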

Bayesian Vertex Nomination Using Content and Context

This work formulates a new Bayesian model for the vertex nomination problem, aiming to construct a ‘nomination list’ in which the truly interesting entities are concentrated at the top.

node2vec: Scalable Feature Learning for Networks

node2vec is an algorithmic framework for learning continuous feature representations for nodes in networks; it defines a flexible notion of a node's network neighborhood and designs a biased random walk procedure that efficiently explores diverse neighborhoods.
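That biased walk is small enough to sketch directly. Below, the return parameter p and the in-out parameter q reweight each candidate step by the candidate's distance to the node visited before the current one, which is the mechanism the summary describes; the hyperparameter names p and q follow the paper, the rest is a hypothetical illustration.

```python
import random

def node2vec_walk(adj, start, length, p=1.0, q=1.0):
    """One node2vec-style biased random walk. `adj` maps each node to
    a list of neighbors. Candidate steps are reweighted by distance to
    the previously visited node: 1/p to return to it, 1 to stay within
    one hop of it, 1/q to move outward."""
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        nbrs = adj[cur]
        if not nbrs:
            break
        if len(walk) == 1:                      # first step: uniform
            walk.append(random.choice(nbrs))
            continue
        prev = walk[-2]
        weights = [1.0 / p if x == prev
                   else 1.0 if x in adj[prev]
                   else 1.0 / q
                   for x in nbrs]
        walk.append(random.choices(nbrs, weights=weights)[0])
    return walk

# Toy usage: a 4-cycle; q < 1 biases the walk outward (DFS-like).
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(node2vec_walk(adj, start=0, length=8, p=2.0, q=0.5))
```

Interpolating between p and q lets the same procedure mimic breadth-first (structural) or depth-first (homophilous) neighborhood exploration, which is what makes the neighborhood notion flexible.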

Vertex nomination

This advanced review examines the relevant literature, with particular focus on the importance and inclusion of edge and vertex attributes used in conjunction with the graph structure.

A Survey on Transfer Learning

This survey discusses the relationship between transfer learning and other related machine learning techniques, such as domain adaptation, multitask learning, sample selection bias, and covariate shift.