Incorporating Relevance Feedback for Information-Seeking Retrieval using Few-Shot Document Re-Ranking

@article{Baumgrtner2022IncorporatingRF,
  title={Incorporating Relevance Feedback for Information-Seeking Retrieval using Few-Shot Document Re-Ranking},
  author={Tim Baumg{\"a}rtner and Leonardo F. R. Ribeiro and Nils Reimers and Iryna Gurevych},
  journal={ArXiv},
  year={2022},
  volume={abs/2210.10695}
}
Pairing a lexical retriever with a neural re-ranking model has set state-of-the-art performance on large-scale information retrieval datasets. This pipeline covers scenarios such as question answering or navigational queries; in information-seeking scenarios, however, users often indicate whether a document is relevant to their query in the form of clicks or explicit feedback. Therefore, in this work, we explore how relevance feedback can be directly integrated into neural re-ranking…
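
As a rough illustration of the setting, the sketch below conditions a standard cross-encoder re-ranker on one known-relevant feedback document by concatenating it onto the query; the model name, separator string, and augmentation scheme are illustrative assumptions, not the paper's exact method.

    # Hedged sketch: condition a cross-encoder re-ranker on relevance feedback
    # by appending a judged-relevant document to the query side of each pair.
    from sentence_transformers import CrossEncoder

    reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

    def rerank_with_feedback(query, candidates, feedback_doc):
        # "[FEEDBACK]" is an illustrative separator, not a special token.
        augmented_query = query + " [FEEDBACK] " + feedback_doc
        scores = reranker.predict([(augmented_query, doc) for doc in candidates])
        return sorted(zip(candidates, scores), key=lambda x: -x[1])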

Optimizing Test-Time Query Representations for Dense Retrieval

TOUR is introduced, which further optimizes instance-level query representations guided by signals from test-time retrieval results, significantly improving end-to-end open-domain QA accuracy as well as passage retrieval performance.
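
A minimal sketch of the test-time idea, assuming PyTorch and a dual-encoder retriever: treat the query embedding as a learnable parameter and take a few gradient steps against pseudo-relevance labels on the current top-k results. The loss and optimizer choices here are assumptions, not the paper's exact recipe.

    import torch

    def optimize_query(q_vec, doc_vecs, pseudo_labels, steps=3, lr=0.1):
        # q_vec: (d,) initial query embedding; doc_vecs: (k, d) top-k passages;
        # pseudo_labels: (k,) in {0, 1}, e.g. produced by a cross-encoder.
        q = q_vec.clone().requires_grad_(True)
        opt = torch.optim.SGD([q], lr=lr)
        for _ in range(steps):
            scores = doc_vecs @ q  # inner-product retrieval scores
            loss = torch.nn.functional.binary_cross_entropy_with_logits(
                scores, pseudo_labels)  # pull q toward pseudo-relevant docs
            opt.zero_grad(); loss.backward(); opt.step()
        return q.detach()  # re-run retrieval with the refined embedding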

References


Improving Query Representations for Dense Retrieval with Pseudo Relevance Feedback

ANCE-PRF, a new query encoder that uses pseudo relevance feedback (PRF) to improve query representations for dense retrieval, significantly outperforms ANCE and other recent dense retrieval systems on several datasets.
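
The core move in ANCE-PRF is to re-encode the query together with the top-k retrieved passages into a single refined query vector. The sketch below reuses an off-the-shelf dense encoder purely for illustration; ANCE-PRF trains a dedicated PRF query encoder for this input.

    from sentence_transformers import SentenceTransformer

    encoder = SentenceTransformer("msmarco-distilbert-dot-v5")  # illustrative model

    def prf_query_embedding(query, top_k_passages):
        # Concatenate the query with its pseudo-relevant passages and encode
        # the whole sequence as the new query representation.
        prf_input = " ".join([query] + top_k_passages)
        return encoder.encode(prf_input)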

Iterative Relevance Feedback for Answer Passage Retrieval with Passage-level Semantic Match

It is shown that iterative feedback is more effective than the top-k approach for answer retrieval and that it can produce significant improvements over both word-based iterative feedback models and those based on term-level semantic similarity.
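
Iterative feedback retrieves only a few passages per round, collects judgments on them, and refines the query before retrieving again. The loop below uses a classic Rocchio update as the refinement step, which is a stand-in assumption; search and judge are hypothetical callbacks.

    import numpy as np

    def rocchio(q, rel, nonrel, alpha=1.0, beta=0.75, gamma=0.15):
        # Classic Rocchio update over judged document vectors (either may be empty).
        q = alpha * q
        if len(rel):
            q = q + beta * np.mean(rel, axis=0)
        if len(nonrel):
            q = q - gamma * np.mean(nonrel, axis=0)
        return q

    def iterative_feedback(q, search, judge, rounds=3, k=1):
        # Show only k new results per round instead of one top-k batch.
        seen = set()
        for _ in range(rounds):
            hits = [h for h in search(q) if h["id"] not in seen][:k]
            seen |= {h["id"] for h in hits}
            rel = np.array([h["vec"] for h in hits if judge(h)])
            nonrel = np.array([h["vec"] for h in hits if not judge(h)])
            q = rocchio(q, rel, nonrel)
        return q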

Learning a Deep Listwise Context Model for Ranking Refinement

This work proposes to use the inherent feature distributions of the top results to learn a Deep Listwise Context Model that helps to fine-tune the initial ranked list and can significantly improve state-of-the-art learning-to-rank methods on benchmark retrieval corpora.
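
One way to read "listwise context" in code: feed the feature vectors of the initial top-k results through a sequence encoder and rescore each result in the context of the whole list. Dimensions and the GRU choice below are illustrative, not the paper's exact architecture.

    import torch
    import torch.nn as nn

    class ListwiseContextModel(nn.Module):
        def __init__(self, feat_dim=64, hidden=128):
            super().__init__()
            self.encoder = nn.GRU(feat_dim, hidden, batch_first=True,
                                  bidirectional=True)
            self.scorer = nn.Linear(2 * hidden, 1)

        def forward(self, feats):                 # feats: (batch, k, feat_dim)
            ctx, _ = self.encoder(feats)          # contextualized result states
            return self.scorer(ctx).squeeze(-1)   # (batch, k) refined scores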

NPRF: A Neural Pseudo Relevance Feedback Framework for Ad-hoc Information Retrieval

This work proposes an end-to-end neural PRF framework that can be used with existing neural IR models by embedding different neural models as building blocks, and confirms the effectiveness of the proposed NPRF framework in improving the performance of two state-of-the-art neural IR models.
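
The NPRF recipe, roughly: score the target document not only against the query but also against each pseudo-relevant feedback document, then combine the evidence. In this sketch, rel_model stands for any embedded neural IR matching model, and the additive combination is a simplification of the paper's aggregation.

    def nprf_score(rel_model, query, doc, feedback_docs, weights):
        # Direct query-document relevance plus feedback-document evidence,
        # with each feedback document weighted by its own retrieval score.
        direct = rel_model(query, doc)
        prf = sum(w * rel_model(fd, doc) for fd, w in zip(feedback_docs, weights))
        return direct + prf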

BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models

This work extensively analyzes different retrieval models and provides several suggestions that it believes may be useful for future work, finding that performing well consistently across all datasets is challenging.
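
For reference, loading a BEIR task follows the pattern below, per the beir package's documented usage; the dataset choice is arbitrary.

    from beir import util
    from beir.datasets.data_loader import GenericDataLoader

    url = ("https://public.ukp.informatik.tu-darmstadt.de/thakur/"
           "BEIR/datasets/scifact.zip")
    data_path = util.download_and_unzip(url, "datasets")
    corpus, queries, qrels = GenericDataLoader(data_path).load(split="test")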

Optimizing search engines using clickthrough data

The goal of this paper is to develop a method that utilizes clickthrough data for training, namely the query-log of the search engine in connection with the log of links the users clicked on in the presented ranking.
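
The paper's key heuristic is that a clicked result is preferred over every skipped result ranked above it; each such preference becomes a pairwise training example. Below is a small sketch of that extraction, with a linear SVM on feature differences standing in for the original Ranking SVM.

    import numpy as np
    from sklearn.svm import LinearSVC

    def pairs_from_clicks(feats, clicked):
        # feats: per-result feature vectors in rank order; clicked: booleans.
        X, y = [], []
        for i, c in enumerate(clicked):
            if not c:
                continue
            for j in range(i):
                if not clicked[j]:  # skipped above a click => preference
                    X.append(feats[i] - feats[j]); y.append(1)
                    X.append(feats[j] - feats[i]); y.append(-1)
        return np.array(X), np.array(y)

    X, y = pairs_from_clicks(np.random.rand(5, 4), [False, True, False, True, False])
    ranker = LinearSVC().fit(X, y)  # ranker.coef_ is the learned weight vector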

ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT

ColBERT is presented, a novel ranking model that adapts deep LMs (in particular, BERT) for efficient retrieval; it is competitive with existing BERT-based models (and outperforms every non-BERT baseline) and enables leveraging vector-similarity indexes for end-to-end retrieval directly from millions of documents.
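
ColBERT's late interaction is compact enough to state directly: every query token embedding is matched against its most similar document token embedding, and the maxima are summed. The MaxSim operator below follows the paper; random embeddings stand in for BERT outputs.

    import torch

    def maxsim_score(q_embs, d_embs):
        # q_embs: (num_q_tokens, dim); d_embs: (num_d_tokens, dim); L2-normalized.
        sim = q_embs @ d_embs.T               # token-level similarity matrix
        return sim.max(dim=1).values.sum()    # sum of per-query-token maxima

    q = torch.nn.functional.normalize(torch.randn(8, 128), dim=-1)
    d = torch.nn.functional.normalize(torch.randn(100, 128), dim=-1)
    print(maxsim_score(q, d))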

BERT-QE: Contextualized Query Expansion for Document Re-ranking

A novel query expansion model is proposed that leverages the strength of the BERT model to select relevant document chunks for expansion; it significantly outperforms BERT-Large models.
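
In outline, BERT-QE splits the top-ranked documents into chunks, keeps the chunks most relevant to the query, and folds chunk-document scores into the final ranking. The sketch simplifies the paper's softmax-weighted aggregation to a mean; score is a placeholder for a BERT relevance scorer, and the parameters are illustrative.

    def split_into_chunks(text, n=5):
        words = text.split()
        return [" ".join(words[i:i + n]) for i in range(0, len(words), n)]

    def bert_qe_score(score, query, doc, top_docs, m=3, alpha=0.5):
        # Select the m chunks most relevant to the query, then interpolate
        # the direct query-document score with chunk-document scores.
        chunks = [c for d in top_docs for c in split_into_chunks(d)]
        top_chunks = sorted(chunks, key=lambda c: score(query, c), reverse=True)[:m]
        expansion = sum(score(c, doc) for c in top_chunks) / m
        return alpha * score(query, doc) + (1 - alpha) * expansion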

Efficiently Teaching an Effective Dense Retriever with Balanced Topic Aware Sampling

This work introduces an efficient topic-aware query sampling and balanced margin sampling technique, called TAS-Balanced, and produces the first dense retriever that outperforms every other method on recall at any cutoff on TREC-DL, allowing more resource-intensive re-ranking models to operate on fewer passages to improve results further.
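
Topic-aware sampling clusters the training queries once and composes each batch from a single cluster, so in-batch negatives share a topic. Below is a sketch of just that sampling step; cluster count and sizes are illustrative, and balanced margin sampling of passage pairs is omitted.

    import numpy as np
    from sklearn.cluster import KMeans

    query_embs = np.random.rand(10000, 64)     # stand-in query embeddings
    clusters = KMeans(n_clusters=100, n_init=10).fit_predict(query_embs)

    def sample_batch(rng, batch_size=32):
        c = rng.integers(100)                  # pick one topic cluster
        members = np.flatnonzero(clusters == c)
        return rng.choice(members, size=batch_size, replace=True)

    batch = sample_batch(np.random.default_rng(0))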

An incremental approach to efficient pseudo-relevance feedback

Experimental results on TREC Terabyte collections show that the proposed incremental approach can improve the efficiency of pseudo-relevance feedback methods by a factor of two without sacrificing their effectiveness.
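
The incremental idea can be sketched as streaming feedback documents into running expansion-term weights, so adding one more document never reprocesses the ones already seen; the relevance-model-style weighting here is an illustrative choice, not the paper's exact scheme.

    from collections import Counter, defaultdict

    def incremental_prf(doc_score_stream, top_t=10):
        # Fold in one (text, retrieval score) pair at a time and keep
        # cumulative term weights for the expanded query.
        weights = defaultdict(float)
        for doc, score in doc_score_stream:
            tf = Counter(doc.split())
            total = sum(tf.values())
            for term, f in tf.items():
                weights[term] += score * f / total
        return sorted(weights.items(), key=lambda kv: -kv[1])[:top_t]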
...