Corpus ID: 204823877

Self-Attentive Document Interaction Networks for Permutation Equivariant Ranking

@article{Pasumarthi2019SelfAttentiveDI,
  title={Self-Attentive Document Interaction Networks for Permutation Equivariant Ranking},
  author={Rama Kumar Pasumarthi and Xuanhui Wang and Michael Bendersky and Marc Najork},
  journal={ArXiv},
  year={2019},
  volume={abs/1910.09676}
}
How to leverage cross-document interactions to improve ranking performance is an important topic in information retrieval (IR) research. However, this topic has not been well studied in the learning-to-rank setting, and most existing work still treats each document independently while scoring. Recent developments in deep learning have shown strength in modeling complex relationships across sequences and sets. This motivates us to study how to leverage cross-document interactions for… 
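The paper's core idea is to let a document's score depend on the other documents retrieved for the same query. Below is a minimal NumPy sketch of that idea, under illustrative assumptions (the weight shapes, dimensions, and single attention layer are mine, not the paper's actual configuration): one self-attention layer mixes information across the documents in a list, and a per-document linear head produces scores. Because attention weights come from pairwise dot products and the head is applied row by row, permuting the input documents simply permutes the output scores, which is the permutation equivariance in the title.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attentive_scores(X, Wq, Wk, Wv, w_out):
    """Score a list of documents jointly with one self-attention layer.

    X: [n_docs, d] per-document feature vectors.
    Wq, Wk, Wv: [d, d_k] projection matrices (illustrative assumptions).
    w_out: [d_k] linear scoring head applied to each attended vector.
    Returns [n_docs] scores; permuting the rows of X permutes the scores.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    attn = softmax(Q @ K.T / np.sqrt(K.shape[-1]))  # [n_docs, n_docs]
    H = attn @ V                                    # cross-document mixing
    return H @ w_out

rng = np.random.default_rng(0)
n, d, dk = 5, 8, 4
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, dk)) for _ in range(3))
w_out = rng.normal(size=dk)
scores = self_attentive_scores(X, Wq, Wk, Wv, w_out)

# Equivariance check: permuting the input list permutes the scores identically.
perm = rng.permutation(n)
assert np.allclose(self_attentive_scores(X[perm], Wq, Wk, Wv, w_out), scores[perm])
```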

Citations

SetRank: Learning a Permutation-Invariant Ranking Model for Information Retrieval

A neural learning-to-rank model called SetRank is proposed, which directly learns a permutation-invariant ranking model defined on document sets of any size. Experimental results showed that SetRank significantly outperformed baselines that include traditional learning-to-rank models and state-of-the-art neural IR models.

Groupwise Query Performance Prediction with BERT

An end-to-end BERT-based groupwise QPP model is proposed, which employs a groupwise predictor to jointly learn from multiple queries and documents by incorporating both cross-query and cross-document information.

Fast Attention-based Learning-To-Rank Model for Structured Map Search

This work proposes a novel deep neural network LTR architecture capable of seamlessly handling heterogeneous inputs, similar to GBDT-based methods, and serving as a low-cost alternative suitable to power ranking in industrial map search engines across a variety of languages and markets.

Diversifying Search Results using Self-Attention Network

Experimental results show that the proposed framework outperforms existing methods and confirm the effectiveness of modeling all candidate documents globally for overall novelty and subtopic coverage, instead of comparing each candidate document only with the already-selected document sequence.

An Alternative Cross Entropy Loss for Learning-to-Rank

This work proposes a cross entropy-based learning-to-rank loss function that is theoretically sound, is a convex bound on NDCG (a popular ranking metric), and is consistent with NDCG under learning scenarios common in information retrieval.

CODER: An efficient framework for improving retrieval through COntextualized Document Embedding Reranking

A large set of experiments evaluating CODER on the MS MARCO and TripClick collections shows that the contextual reranking of precomputed document embeddings leads to a significant improvement in retrieval performance.

Co-BERT: A Context-Aware BERT Retrieval Model Incorporating Local and Query-specific Context

An end-to-end transformer-based ranking model, named Co-BERT, is proposed that exploits BERT architectures to calibrate query-document representations using pseudo relevance feedback before jointly modeling the relevance of a group of documents.

Incorporating Ranking Context for End-to-End BERT Re-ranking

An end-to-end BERT-based ranking model is proposed that incorporates the ranking context by jointly modeling the interactions between a query and multiple documents in the same ranked list, using pseudo relevance feedback to adjust the relevance weightings.

Unbiased Learning to Rank

Eight state-of-the-art ULTR algorithms are evaluated and it is shown that many of them can be used in both offline settings and online environments with or without minor modifications.

Unbiased Learning to Rank: Online or Offline?

Six state-of-the-art ULTR algorithms are evaluated and it is shown that most of them can be used in both offline settings and online environments with or without minor modifications.

References

SHOWING 1-10 OF 51 REFERENCES

Learning a Deep Listwise Context Model for Ranking Refinement

This work proposes to use the inherent feature distributions of the top results to learn a Deep Listwise Context Model that helps to fine-tune the initial ranked list, and it can significantly improve state-of-the-art learning-to-rank methods on benchmark retrieval corpora.

Learning Groupwise Multivariate Scoring Functions Using Deep Neural Networks

This work proposes a new framework for multivariate scoring functions, in which the relevance score of a document is determined jointly by multiple documents in the list, and refers to this framework as groupwise scoring functions (GSFs).
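To make the groupwise idea concrete, here is a hypothetical sketch (not the paper's exact architecture): a shared function scores each size-g group of documents jointly, and a document's final score accumulates its contributions across all groups that contain it. Full enumeration is O(n^g), so practical GSFs approximate it (e.g., by sampling groups); the exhaustive loop below is only for clarity.

```python
import numpy as np
from itertools import combinations

def groupwise_scores(X, group_fn, g=2):
    """Sketch of a groupwise scoring function (GSF).

    X: [n_docs, d] feature vectors.
    group_fn: maps the concatenated features of g documents to g scores;
              it stands in for a learned, shared neural network.
    """
    scores = np.zeros(len(X))
    for group in combinations(range(len(X)), g):
        contrib = group_fn(np.concatenate([X[i] for i in group]))
        for slot, i in enumerate(group):
            scores[i] += contrib[slot]  # accumulate per-group contributions
    return scores

# Toy stand-in for the learned network: a fixed random linear map.
rng = np.random.default_rng(0)
d, g = 4, 2
W = rng.normal(size=(g * d, g))
X = rng.normal(size=(6, d))
print(groupwise_scores(X, lambda z: z @ W, g=g))
```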

Neural Ranking Models with Weak Supervision

This paper proposes to train a neural ranking model using weak supervision, where labels are obtained automatically without human annotators or any external resources, and suggests that supervised neural ranking models can greatly benefit from pre-training on large amounts of weakly labeled data that can be easily obtained from unsupervised IR models.

An Analysis of the Softmax Cross Entropy Loss for Learning-to-Rank with Binary Relevance

An analytical connection is established between ListNet's loss and two popular ranking metrics in a learning-to-rank setup with binary relevance labels, and it is shown that the loss bounds Mean Reciprocal Rank and Normalized Discounted Cumulative Gain.
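For reference, the ListNet-style softmax cross entropy analyzed here fits in a few lines; normalizing the binary labels into a target distribution, as below, is a common convention and an assumption on my part.

```python
import numpy as np

def softmax_cross_entropy_loss(scores, labels):
    """Softmax cross entropy over one ranked list (ListNet top-one style).

    scores: model scores for the documents in a list, shape [n].
    labels: binary relevance labels, shape [n], with at least one positive.
    """
    shifted = scores - scores.max()                      # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum())  # log-softmax
    target = labels / labels.sum()                       # relevance distribution
    return -(target * log_probs).sum()

print(softmax_cross_entropy_loss(np.array([2.0, 0.5, -1.0]),
                                 np.array([1.0, 0.0, 0.0])))
```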

Neural Attention for Learning to Rank Questions in Community Question Answering

This paper applies Long Short-Term Memory networks with an attention mechanism, which can select important parts of text, to the task of similar question retrieval from community question answering (cQA) forums, and applies tree kernels to the filtered text representations, thus exploiting the implicit features of the subtree space for learning question reranking.

Seq2Slate: Re-ranking and Slate Optimization with RNNs

A sequence-to-sequence model for ranking called seq2slate is proposed, which predicts the next "best" item to place on the slate given the items already selected, allowing complex dependencies between items to be captured directly in a flexible and scalable way.
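A toy version of this sequential re-ranking loop, with a hypothetical score_next function standing in for seq2slate's learned pointer-network decoder:

```python
def rerank_slate(items, score_next):
    """Greedy seq2slate-style decoding (sketch).

    score_next(selected, candidate) -> float is a placeholder for a learned
    model that scores a candidate conditioned on the items already placed.
    """
    selected, remaining = [], list(items)
    while remaining:
        best = max(remaining, key=lambda c: score_next(selected, c))
        remaining.remove(best)
        selected.append(best)
    return selected

# Toy scorer: start with the largest item, then prefer items close in value
# to the one just placed (a stand-in for learned inter-item dependencies).
print(rerank_slate([1, 2, 2, 3],
                   lambda sel, c: -abs(c - sel[-1]) if sel else c))
```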

TF-Ranking: Scalable TensorFlow Library for Learning-to-Rank

This work introduces TensorFlow Ranking, the first open-source library for solving large-scale ranking problems in a deep learning framework; it is highly configurable and provides easy-to-use APIs to support different scoring mechanisms, loss functions, and evaluation metrics in the learning-to-rank setting.
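A minimal sketch of driving the library from Keras; the model body and feature shapes are illustrative assumptions, while SoftmaxLoss and NDCGMetric are part of TF-Ranking's Keras API.

```python
import tensorflow as tf
import tensorflow_ranking as tfr

# Illustrative per-document scoring model over padded document lists:
# input [batch, list_size, num_features] -> scores [batch, list_size].
inputs = tf.keras.Input(shape=(None, 16))
hidden = tf.keras.layers.Dense(32, activation="relu")(inputs)
scores = tf.squeeze(tf.keras.layers.Dense(1)(hidden), axis=-1)
model = tf.keras.Model(inputs, scores)

model.compile(
    optimizer="adam",
    loss=tfr.keras.losses.SoftmaxLoss(),          # listwise softmax loss
    metrics=[tfr.keras.metrics.NDCGMetric(topn=5)],
)
```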

Revisiting Approximate Metric Optimization in the Age of Deep Neural Networks

This study revisits the approximation framework originally proposed by Qin et al. in light of recent advances in neural networks, and aims to show that the ideas from that work are more relevant than ever and can lay the foundation of learning-to-rank research in the age of deep neural networks.

A Deep Look into Neural Ranking Models for Information Retrieval

Learning deep structured semantic models for web search using clickthrough data

A series of new latent semantic models with a deep structure that project queries and documents into a common low-dimensional space where the relevance of a document given a query is readily computed as the distance between them are developed.
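In code, the relevance computation reduces to a cosine similarity between two projected vectors; the tiny two-layer towers below are a hypothetical stand-in for the paper's deep structured semantic model.

```python
import numpy as np

def project(x, W1, W2):
    # A tiny tower: two nonlinear layers mapping text features into the
    # shared low-dimensional semantic space (stand-in for the DSSM towers).
    return np.tanh(np.tanh(x @ W1) @ W2)

def dssm_relevance(q, d, q_weights, d_weights):
    """Cosine similarity between the projected query and document vectors."""
    qv, dv = project(q, *q_weights), project(d, *d_weights)
    return qv @ dv / (np.linalg.norm(qv) * np.linalg.norm(dv) + 1e-12)

rng = np.random.default_rng(0)
in_dim, hid, out = 30, 12, 8
q_weights = (rng.normal(size=(in_dim, hid)), rng.normal(size=(hid, out)))
d_weights = (rng.normal(size=(in_dim, hid)), rng.normal(size=(hid, out)))
print(dssm_relevance(rng.normal(size=in_dim), rng.normal(size=in_dim),
                     q_weights, d_weights))
```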
...