Corpus ID: 245218563

CONQRR: Conversational Query Rewriting for Retrieval with Reinforcement Learning

@article{Wu2021CONQRRCQ,
  title={CONQRR: Conversational Query Rewriting for Retrieval with Reinforcement Learning},
  author={Zeqiu Wu and Yi Luan and Hannah Rashkin and D. Reitter and Gaurav Singh Tomar},
  journal={ArXiv},
  year={2021},
  volume={abs/2112.08558}
}
Compared to standard retrieval tasks, passage retrieval for conversational question answering (CQA) poses new challenges in understanding the current user question, as each question needs to be interpreted within the dialogue context. Moreover, it can be expensive to re-train well-established retrievers such as search engines that are originally developed for non-conversational queries. To facilitate their use, we develop a query rewriting model CONQRR …
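To make the abstract's rewrite-then-retrieve setup concrete, here is a minimal sketch assuming a generic seq2seq rewriter loaded through HuggingFace Transformers; the t5-base checkpoint and the " ||| " turn separator are placeholders, not the released CONQRR model or its exact input format.

# Minimal sketch of rewrite-then-retrieve, assuming a generic seq2seq
# rewriter (the checkpoint below is a stand-in, not the CONQRR release).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "t5-base"  # placeholder; CONQRR fine-tunes a seq2seq model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

def rewrite(history: list[str], question: str) -> str:
    """Rewrite a context-dependent question into a standalone query."""
    # Flatten the dialogue so the model sees prior turns plus the question.
    source = " ||| ".join(history + [question])
    inputs = tokenizer(source, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

history = ["Who wrote The Old Man and the Sea?", "Ernest Hemingway."]
standalone = rewrite(history, "When did he win the Nobel Prize?")
# A trained rewriter would resolve "he" to "Ernest Hemingway"; the
# rewritten query can then be sent to an off-the-shelf retriever unchanged.
print(standalone)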

Dialog Inpainting: Turning Documents into Dialogs

Dialog inpainting takes the text of any document and transforms it into a two-person dialog between the writer and an imagined reader, using a dialog inpainter to predict what the imagined reader asked or said in between each of the writer's utterances.
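A minimal sketch of the inpainting loop this summary describes, assuming a stub predict_reader_turn in place of the trained inpainter: each writer sentence is kept verbatim and a reader turn is generated before it.

# Sketch of the dialog-inpainting loop: each writer sentence is kept
# verbatim, and a model fills in the reader turn that plausibly preceded
# it. predict_reader_turn is a stub standing in for the inpainter.
def predict_reader_turn(dialog_so_far: list[str], next_writer_turn: str) -> str:
    # A real inpainter conditions on the partial dialog and the upcoming
    # writer sentence; here we just emit a generic placeholder question.
    return f"Can you tell me more? (before: {next_writer_turn[:40]}...)"

def inpaint_dialog(document_sentences: list[str]) -> list[tuple[str, str]]:
    dialog: list[str] = []
    turns = []
    for sentence in document_sentences:
        reader = predict_reader_turn(dialog, sentence)
        dialog.extend([reader, sentence])
        turns.append((reader, sentence))
    return turns

doc = ["Hemingway won the Nobel Prize in Literature in 1954.",
       "The award cited his mastery of the art of narrative."]
for reader, writer in inpaint_dialog(doc):
    print("Reader:", reader)
    print("Writer:", writer)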

References

SHOWING 1-10 OF 50 REFERENCES

Contextualized Query Embeddings for Conversational Search

A compact and effective model for low-latency passage retrieval in conversational search is described; based on learned dense representations, it effectively rewrites conversational queries as dense representations and is evaluated on conversational search and open-domain question answering datasets.
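A small sketch of retrieval with a single contextualized query embedding, assuming a stub encode function (a hash-seeded random projection) in place of the paper's learned encoders; passages are ranked by inner product.

import zlib
import numpy as np

# Sketch of retrieval with a contextualized query embedding: the encoder
# maps the concatenated dialogue history plus current question to one
# dense vector, and passages are ranked by inner product. encode is a
# stub; a real system would use the trained query/passage encoders.
def encode(text: str, dim: int = 128) -> np.ndarray:
    # Deterministic stand-in for a learned encoder: a random projection
    # seeded from a hash of the text.
    rng = np.random.default_rng(zlib.crc32(text.encode("utf-8")))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

passages = ["Hemingway won the 1954 Nobel Prize in Literature.",
            "The Old Man and the Sea was published in 1952.",
            "Paris is the capital of France."]
index = np.stack([encode(p) for p in passages])  # (num_passages, dim)

history = "Who wrote The Old Man and the Sea? Ernest Hemingway."
question = "When did he win the Nobel Prize?"
q = encode(history + " " + question)

scores = index @ q                 # inner-product relevance scores
ranking = np.argsort(-scores)      # best passage first
print([passages[i] for i in ranking])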

Open-Domain Question Answering Goes Conversational via Question Rewriting

A strong baseline approach is introduced that combines the state-of-the-art model for question rewriting with competitive models for open-domain QA, and its effectiveness is reported.

Few-Shot Generative Conversational Query Rewriting

This paper develops two methods, based on rules and self-supervised learning, to generate weak supervision data using large amounts of ad hoc search sessions, and to fine-tune GPT-2 to rewrite conversational queries.
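One plausible rule-based weak-supervision scheme in the spirit of this summary, with the hypothetical helper simulate_session: later queries in an ad hoc search session drop terms already mentioned, yielding (conversational query, full query) training pairs for a rewriter such as GPT-2.

# Sketch of rule-based weak supervision: given an ad hoc search session,
# later queries are made "conversational" by dropping terms already
# mentioned earlier, mimicking the ellipsis found in real dialogues.
def simulate_session(queries: list[str]) -> list[tuple[str, str]]:
    seen: set[str] = set()
    pairs = []
    for full_query in queries:
        tokens = full_query.split()
        # Omit tokens the "user" already said; fall back to the full
        # query if everything was already mentioned.
        shortened = [t for t in tokens if t.lower() not in seen] or tokens
        pairs.append((" ".join(shortened), full_query))
        seen.update(t.lower() for t in tokens)
    return pairs

session = ["ernest hemingway biography",
           "ernest hemingway nobel prize",
           "ernest hemingway nobel prize speech"]
for conversational, target in simulate_session(session):
    print(f"{conversational!r} -> {target!r}")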

Open-Retrieval Conversational Question Answering

This work builds an end-to-end system for ORConvQA, featuring a retriever, a reranker, and a reader that are all based on Transformers, and demonstrates that a learnable retriever is crucial for ORConvQA.

Saving Dense Retriever from Shortcut Dependency in Conversational Search

The existence of a retrieval shortcut in conversational search is demonstrated, which causes models to retrieve passages relying solely on partial history while disregarding the latest question; iterative hard negatives mined by pre-trained dense retrievers are also explored.
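A hedged sketch of the iterative hard-negative mining this summary mentions: a pre-trained dense retriever scores the corpus, and top-ranked passages that are not gold are kept as hard negatives for the next training round. Encoders are replaced by random vectors for illustration.

import numpy as np

# Hard-negative mining sketch: run a dense retriever over the corpus,
# keep top-ranked passages that are *not* gold answers as hard
# negatives, retrain, and repeat.
def mine_hard_negatives(query_vecs, passage_vecs, gold_ids, k=5):
    """Return, per query, the top-k non-gold passage indices."""
    scores = query_vecs @ passage_vecs.T          # (num_q, num_p)
    negatives = []
    for qi, gold in enumerate(gold_ids):
        ranked = np.argsort(-scores[qi])
        hard = [int(p) for p in ranked if int(p) != gold][:k]
        negatives.append(hard)
    return negatives

rng = np.random.default_rng(0)
Q = rng.standard_normal((3, 64))    # 3 query embeddings
P = rng.standard_normal((100, 64))  # 100 passage embeddings
print(mine_hard_negatives(Q, P, gold_ids=[7, 42, 99]))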

Question Rewriting for Conversational Question Answering

A conversational QA architecture is introduced that sets the new state of the art on the TREC CAsT 2019 passage retrieval dataset and the same QR model improves QA performance on the QuAC dataset with respect to answer span extraction, which is the next step in QA after passage retrieval.
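A stub-level sketch of the rewrite, retrieve, and extract pipeline this summary describes; every stage below is a placeholder where the paper uses a trained model.

# Pipeline sketch: question rewriting -> passage retrieval -> span
# extraction. Each stage is a stub standing in for a trained model.
def rewrite(history: list[str], question: str) -> str:
    return question  # stub: a trained QR model resolves references here

def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    # Stub lexical-overlap scorer standing in for a trained retriever.
    q_tokens = set(query.lower().split())
    scored = sorted(corpus, key=lambda p: -len(q_tokens & set(p.lower().split())))
    return scored[:k]

def extract_span(question: str, passage: str) -> str:
    return passage  # stub: a reader model would return the answer span

corpus = ["Hemingway won the Nobel Prize in 1954.",
          "The Old Man and the Sea appeared in 1952."]
query = rewrite(["Who wrote The Old Man and the Sea?"],
                "When did he win the Nobel Prize?")
passages = retrieve(query, corpus)
print(extract_span(query, passages[0]))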

Making Information Seeking Easier: An Improved Pipeline for Conversational Search

The proposed combination achieves a relative performance improvement of 14.8% over the state-of-the-art baseline and is even able to surpass the oracle.

Ask the Right Questions: Active Question Reformulation with Reinforcement Learning

This work proposes an agent that sits between the user and a black box QA system and learns to reformulate questions to elicit the best possible answers, and finds that successful question reformulations look quite different from natural language paraphrases.
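A minimal REINFORCE sketch of the agent-in-the-middle idea, assuming a toy policy over a fixed candidate set and a stub qa_reward standing in for the black-box QA system's answer score.

import torch

# Minimal REINFORCE sketch: the policy scores candidate reformulations,
# one is sampled, the black-box QA system's answer quality is the
# reward, and the policy is updated with the log-derivative trick.
candidates = ["when did hemingway win the nobel prize",
              "hemingway nobel prize year",
              "nobel prize 1954 literature winner"]

logits = torch.zeros(len(candidates), requires_grad=True)  # toy policy
optimizer = torch.optim.Adam([logits], lr=0.1)

def qa_reward(reformulation: str) -> float:
    # Stub for the black-box QA system's answer score (e.g. token F1).
    return 1.0 if "nobel" in reformulation and "hemingway" in reformulation else 0.1

baseline = 0.0
for step in range(200):
    dist = torch.distributions.Categorical(logits=logits)
    action = dist.sample()
    reward = qa_reward(candidates[action])
    baseline = 0.9 * baseline + 0.1 * reward      # moving-average baseline
    loss = -dist.log_prob(action) * (reward - baseline)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(candidates[int(torch.argmax(logits))])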

QuAC: Question Answering in Context

QuAC introduces challenges not found in existing machine comprehension datasets: its questions are often more open-ended, unanswerable, or only meaningful within the dialog context, as shown in a detailed qualitative evaluation.

Conversational Question Reformulation via Sequence-to-Sequence Architectures and Pretrained Language Models

Examining a variety of architectures with different numbers of parameters, the authors demonstrate that the recent text-to-text transfer transformer (T5) achieves the best results on both CANARD and CAsT with fewer parameters than similar transformer architectures.
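A hedged sketch of fine-tuning T5 on CANARD-style (history, question) → rewrite pairs as the summary describes; the " ||| " separator, checkpoint size, and learning rate are illustrative choices, not the paper's exact setup.

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Fine-tuning sketch: the source is the flattened dialogue history plus
# the current question, the target is the standalone rewrite.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)

pairs = [
    ("Who wrote The Old Man and the Sea? ||| Ernest Hemingway. ||| "
     "When did he win the Nobel Prize?",
     "When did Ernest Hemingway win the Nobel Prize?"),
]

model.train()
for source, target in pairs:
    batch = tokenizer(source, return_tensors="pt", truncation=True)
    labels = tokenizer(target, return_tensors="pt", truncation=True).input_ids
    loss = model(**batch, labels=labels).loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()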