End-to-End Neural Ad-hoc Ranking with Kernel Pooling

@inproceedings{Xiong2017EndtoEndNA,
  title={End-to-End Neural Ad-hoc Ranking with Kernel Pooling},
  author={Chenyan Xiong and Zhuyun Dai and Jamie Callan and Zhiyuan Liu and Russell Power},
  booktitle={Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval},
  year={2017}
}
This paper proposes K-NRM, a kernel-based neural model for document ranking. Given a query and a set of documents, K-NRM uses a translation matrix that models word-level similarities via word embeddings, a new kernel-pooling technique that uses kernels to extract multi-level soft-match features, and a learning-to-rank layer that combines those features into the final ranking score. The whole model is trained end-to-end. The ranking layer learns desired feature patterns from the pairwise ranking…
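The kernel-pooling step described above can be sketched in a few lines of NumPy. The kernel means, the kernel width `sigma`, and the clipping epsilon below are illustrative choices, not the paper's exact hyperparameters:

```python
import numpy as np

def kernel_pooling(q_emb, d_emb, mus, sigma=0.1):
    """Soft-match features in the style of K-NRM (illustrative sketch).

    q_emb: (n_q, dim) query word embeddings
    d_emb: (n_d, dim) document word embeddings
    mus:   (K,) kernel means spread over [-1, 1]
    Returns a K-dimensional soft-match feature vector.
    """
    # Translation matrix: cosine similarity between every query/document word pair
    q = q_emb / np.linalg.norm(q_emb, axis=1, keepdims=True)
    d = d_emb / np.linalg.norm(d_emb, axis=1, keepdims=True)
    M = q @ d.T                                                 # (n_q, n_d)
    # RBF kernels pool each query row into K "soft term frequency" counts
    K = np.exp(-(M[:, :, None] - mus) ** 2 / (2 * sigma ** 2))  # (n_q, n_d, K)
    soft_tf = K.sum(axis=1)                                     # (n_q, K)
    # Log-sum over query terms yields the feature vector for the ranking layer
    return np.log(np.clip(soft_tf, 1e-10, None)).sum(axis=0)   # (K,)
```

In the full model, this K-dimensional vector is fed to a learned linear layer (with a tanh) to produce the ranking score, and gradients flow back through the kernels into the word embeddings.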
Convolutional Neural Networks for Soft-Matching N-Grams in Ad-hoc Search
TLDR
Conv-KNRM uses Convolutional Neural Networks to represent n-grams of various lengths and soft-matches them in a unified embedding space; these soft matches are then fed to the kernel-pooling and learning-to-rank layers to generate the final ranking score.
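The n-gram composition step of Conv-KNRM can be sketched as a 1-D convolution over the word-embedding sequence; the flattened-window formulation, filter shapes, and tanh activation below are illustrative assumptions:

```python
import numpy as np

def ngram_embeddings(word_emb, filters, n=2):
    """Compose n-gram representations with a 1-D convolution (illustrative sketch).

    word_emb: (length, dim) word embeddings
    filters:  (n * dim, out_dim) convolution weights (hypothetical shapes)
    Returns (length - n + 1, out_dim): one vector per sliding n-gram window.
    """
    length, dim = word_emb.shape
    # Flatten each window of n consecutive word vectors into one row
    windows = np.stack([word_emb[i:i + n].reshape(-1)
                        for i in range(length - n + 1)])  # (length-n+1, n*dim)
    return np.tanh(windows @ filters)
```

Query and document n-gram vectors of every length pair (unigram-unigram, unigram-bigram, and so on) are then cross-matched and pooled with the same kernels as in K-NRM.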
Soft Kernel-based Ranking on a Statistical Manifold
TLDR
This work proposes a kernel-based neural ranking model based on a statistical manifold that considers the interaction as geodesic on a manifold and proposes a smoothed kernel pooling scheme at different similarity levels based on Riemann normal distribution.
TU Wien @ TREC Deep Learning '19 - Simple Contextualization for Re-ranking
TLDR
The TK (Transformer-Kernel) model is submitted: a neural re-ranking model for ad-hoc search using an efficient contextualization mechanism and a document-length enhanced kernel-pooling, which enables users to gain insight into the model.
Consistency and Variation in Kernel Neural Ranking Model
TLDR
This work studies the consistency of the kernel-based neural ranking model K-NRM, a recent state-of-the-art neural IR model, which is important for reproducible research and deployment in industry, and enables a simple yet effective approach to constructing ensemble rankers.
A Hybrid Deep Model for Learning to Rank Data Tables
TLDR
This work uses a learning-to-rank approach to train a system to capture semantic and relevance signals within interactions between the structured form of candidate tables and query tokens, and proposes using row and column summaries to incorporate table content into a new neural model.
Investigating Weak Supervision in Deep Ranking
TLDR
A cascade ranking framework is proposed to combine the two forms of weakly supervised relevance, which significantly improves the ranking performance of neural ranking models and outperforms the best result in the last NTCIR-13 We Want Web (WWW) task.
Target-Oriented Transformation Networks for Document Retrieval
TLDR
A target-oriented transformation network-based neural ranking model, TTRM, is proposed, which encodes the target information into the document content and utilizes a context-conserving transformation to encapsulate the contextualized features.
An end-to-end pseudo relevance feedback framework for neural document retrieval
TLDR
An end-to-end neural PRF framework is proposed that enriches the representation of user information need from a single query to multiple PRF documents and shows that integrating the existing neural IR models within the NPRF framework results in reduced training and validation losses, and consequently, improved effectiveness of the learned ranking functions.

References

Showing 1-10 of 24 references
A Latent Semantic Model with Convolutional-Pooling Structure for Information Retrieval
TLDR
A new latent semantic model that incorporates a convolutional-pooling structure over word sequences to learn low-dimensional, semantic vector representations for search queries and Web documents is proposed.
Learning semantic representations using convolutional neural networks for web search
TLDR
This paper presents a series of new latent semantic models based on a convolutional neural network to learn low-dimensional semantic vectors for search queries and Web documents that significantly outperform other semantic models in retrieval performance.
Learning deep structured semantic models for web search using clickthrough data
TLDR
A series of new latent semantic models with a deep structure that project queries and documents into a common low-dimensional space where the relevance of a document given a query is readily computed as the distance between them are developed.
Improving Document Ranking with Dual Word Embeddings
TLDR
This paper investigates the popular neural word embedding method Word2vec as a source of evidence in document ranking and proposes the Dual Embedding Space Model (DESM), which provides evidence that a document is about a query term.
Learning to Match using Local and Distributed Representations of Text for Web Search
TLDR
This work proposes a novel document ranking model composed of two separate deep neural networks: one that matches the query and the document using a local representation, and another that matches them using learned distributed representations; matching with distributed representations complements matching with traditional local representations.
A Deep Relevance Matching Model for Ad-hoc Retrieval
TLDR
A novel deep relevance matching model (DRMM) for ad-hoc retrieval that employs a joint deep architecture at the query term level for relevance matching and can significantly outperform some well-known retrieval models as well as state-of-the-art deep matching models.
Integrating and Evaluating Neural Word Embeddings in Information Retrieval
TLDR
This paper uses neural word embeddings within the well-known translation language model for information retrieval, which captures implicit semantic relations between the words in queries and those in relevant documents, thus producing more accurate estimations of document relevance.
Semantic Matching by Non-Linear Word Transportation for Information Retrieval
TLDR
This work introduces a novel retrieval model by viewing the matching between queries and documents as a non-linear word transportation (NWT) problem, and defines the capacity and profit of a transportation model designed for the IR task.
Clickthrough-based translation models for web search: from word models to phrase models
TLDR
This paper provides a quantitative analysis of the language discrepancy issue, and explores the use of clickthrough data to bridge documents and queries, and demonstrates that standard statistical machine translation techniques can be adapted for building a better Web document retrieval system.
Query Expansion with Locally-Trained Word Embeddings
TLDR
It is demonstrated that word embeddings such as word2vec and GloVe, when trained globally, underperform corpus- and query-specific embeddings for retrieval tasks, suggesting that other tasks benefiting from global embeddings may also benefit from local embeddings.