Simulating and Modeling the Risk of Conversational Search

@article{Wang2022SimulatingAM,
  title={Simulating and Modeling the Risk of Conversational Search},
  author={Zhenduo Wang and Qingyao Ai},
  journal={ACM Transactions on Information Systems (TOIS)},
  year={2022},
  volume={40},
  pages={1--33}
}
In conversational search, agents can interact with users by asking clarifying questions to increase their chance of finding better results. Many recent works and shared tasks in both the natural language processing and information retrieval communities have focused on identifying the need to ask clarifying questions and on methodologies for generating them. These works assume that asking a clarifying question is a safe alternative to retrieving results. As existing conversational search models are far…
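
To make the ask-versus-answer trade-off concrete, here is a minimal, hypothetical sketch in Python of a risk-aware decision rule in the spirit of the paper: the agent asks a clarifying question only when the risk-adjusted expected reward of asking exceeds that of answering immediately. All names and numbers are illustrative assumptions, not the authors' actual model.

# Hypothetical sketch (not the paper's model): a risk-aware agent that
# chooses between answering now and asking a clarifying question by
# comparing expected utilities. All fields and values are illustrative.

from dataclasses import dataclass

@dataclass
class TurnEstimate:
    answer_reward: float      # estimated reward if we retrieve/answer now
    clarified_reward: float   # estimated reward after a clarifying question
    question_quality: float   # P(user can interpret/answer the question), in [0, 1]
    patience_penalty: float   # cost of spending one extra conversation turn

def decide(turn: TurnEstimate) -> str:
    """Return 'ask' or 'answer' by comparing risk-adjusted expected rewards.

    Asking is only "safe" when the question is good and the user is patient;
    a bad clarifying question wastes a turn without improving the result.
    """
    # Expected value of asking: succeed with probability question_quality,
    # otherwise fall back to answering anyway; either way, one turn of
    # patience is spent.
    expected_ask = (
        turn.question_quality * turn.clarified_reward
        + (1.0 - turn.question_quality) * turn.answer_reward
        - turn.patience_penalty
    )
    return "ask" if expected_ask > turn.answer_reward else "answer"

# Example: a mediocre immediate answer, a strong post-clarification answer,
# and a reasonably clear question make asking worthwhile.
print(decide(TurnEstimate(0.4, 0.8, 0.7, 0.1)))  # -> "ask"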

References

Showing 1–10 of 45 references

Controlling the Risk of Conversational Search via Reinforcement Learning

TLDR
A risk-aware conversational search agent model is proposed to balance the risk of answering the user's query against that of asking clarifying questions; it significantly outperforms strong non-risk-aware baselines.

Towards Conversational Search and Recommendation: System Ask, User Respond

TLDR
This paper proposes a System Ask -- User Respond (SAUR) paradigm for conversational search, defines the major components of the paradigm, and designs a unified implementation of the framework for product search and recommendation in e-commerce.

Leading Conversational Search by Suggesting Useful Questions

TLDR
A novel evaluation metric, usefulness, is established, which goes beyond relevance and measures whether the suggestions provide valuable information for the next step of a user’s journey, and a public benchmark for useful question suggestion is constructed.

Asking Clarifying Questions in Open-Domain Information-Seeking Conversations

TLDR
This paper formulates the task of asking clarifying questions in open-domain information-seeking conversational systems, proposes an offline evaluation methodology for the task, and collects a dataset, called Qulac, through crowdsourcing; the proposed model significantly outperforms competitive baselines.

Open-Retrieval Conversational Question Answering

TLDR
This work builds an end-to-end system for ORConvQA, featuring a retriever, a reranker, and a reader that are all based on Transformers, and demonstrates that a learnable retriever is crucial for open-retrieval conversational search.

Generating Clarifying Questions for Information Retrieval

TLDR
A taxonomy of clarification for open-domain search queries is identified by analyzing large-scale query reformulation data sampled from Bing search logs, and supervised and reinforcement learning models for generating clarifying questions learned from weak supervision data are proposed.

Analyzing and Learning from User Interactions for Search Clarification

TLDR
This paper conducts a comprehensive study by analyzing large-scale user interactions with clarifying questions in a major web search engine and proposes a model for learning representations of clarifying questions based on the user interaction data as implicit feedback.

Topic Propagation in Conversational Search

TLDR
This work adopts the 2019 TREC Conversational Assistant Track (CAsT) framework to experiment with a modular architecture performing topic-aware utterance rewriting, retrieval of candidate passages for the rewritten utterances, and neural-based re-ranking of the candidate passages.

What Do You Mean Exactly?: Analyzing Clarification Questions in CQA

TLDR
The dialogues between users on a community question answering (CQA) website are explored as a rich repository of information-seeking interactions, and the problem of predicting the specific subject of a clarification question is studied as a first step towards the automatic generation of clarification questions.

Conversational Product Search Based on Negative Feedback

TLDR
A conversational paradigm for product search driven by non-relevant items is proposed, in which fine-grained feedback is collected and used to show better results in the next iteration. Experimental results show that the model is significantly better than state-of-the-art product search baselines that use no feedback, as well as baselines that use item-level negative feedback.