A Wrong Answer or a Wrong Question? An Intricate Relationship between Question Reformulation and Answer Selection in Conversational Question Answering

@article{Vakulenko2020AWA,
  title={A Wrong Answer or a Wrong Question? An Intricate Relationship between Question Reformulation and Answer Selection in Conversational Question Answering},
  author={Svitlana Vakulenko and S. Longpre and Zhucheng Tu and R. Anantha},
  journal={ArXiv},
  year={2020},
  volume={abs/2010.06835}
}
The dependency between an adequate question formulation and correct answer selection is an intriguing but still underexplored area. In this paper, we show that question rewriting (QR) of the conversational context sheds more light on this phenomenon and can also be used to evaluate the robustness of different answer selection approaches. We introduce a simple framework that enables an automated analysis of conversational question answering (QA) performance using question rewrites, and …
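The truncated abstract does not spell the framework out, but the general idea it describes, comparing answer selection quality on the original context-dependent questions against their self-contained rewrites, can be sketched as below. This is only an illustration under assumptions: `select_answer`, the question lists, and the candidate/gold structures are hypothetical placeholders, not the paper's actual interfaces.

```python
# Minimal sketch of the analysis idea, not the authors' implementation.
# `select_answer` is a hypothetical stand-in for any answer-selection model:
# given a question and a list of candidate answers, it returns the index of
# the candidate it ranks highest.
from typing import Callable, List

def accuracy(select_answer: Callable[[str, List[str]], int],
             questions: List[str],
             candidates: List[List[str]],
             gold: List[int]) -> float:
    """Fraction of questions for which the selected candidate is the gold one."""
    hits = sum(select_answer(q, c) == g
               for q, c, g in zip(questions, candidates, gold))
    return hits / len(questions)

def rewriting_gap(select_answer, original_questions, rewritten_questions,
                  candidates, gold) -> float:
    """Performance gap between self-contained rewrites and the original
    (context-dependent) conversational questions for the same model."""
    acc_original = accuracy(select_answer, original_questions, candidates, gold)
    acc_rewritten = accuracy(select_answer, rewritten_questions, candidates, gold)
    return acc_rewritten - acc_original
```

A large positive gap would indicate that the answer-selection model depends heavily on an adequate, self-contained question formulation.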

Citations

Question Rewriting for Open-Domain Conversational QA: Best Practices and Limitations
TLDR
While conversation history modeling with dense representations outperforms QR, there are advantages to applying both jointly: QR boosts performance especially when only a limited number of history turns is considered.
Leveraging Query Resolution and Reading Comprehension for Conversational Passage Retrieval
TLDR
This paper describes the participation of the UvA.ILPS group in the TREC CAsT 2020 track, which uses QuReTeC, a binary term-classification query resolution model, to address conversational passage retrieval.
Question Rewriting for Conversational Question Answering
TLDR
A conversational QA architecture is introduced that sets a new state of the art on the TREC CAsT 2019 passage retrieval dataset; the same QR model also improves QA performance on the QuAC dataset with respect to answer span extraction, the next step in QA after passage retrieval.

References

Showing 1-10 of 30 references
QuAC: Question Answering in Context
TLDR
QuAC introduces challenges not found in existing machine comprehension datasets: its questions are often more open-ended, unanswerable, or only meaningful within the dialog context, as shown in a detailed qualitative evaluation.
Can You Unpack That? Learning to Rewrite Questions-in-Context
TLDR
This work introduces the task of question-in-context rewriting, constructs CANARD, a dataset of 40,527 questions based on QuAC, and trains Seq2Seq models for incorporating context into standalone questions.
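As a rough illustration of the training pairs such Seq2Seq models consume, the sketch below serializes a CANARD-style example (dialogue history plus the current question) into one source string and pairs it with the standalone rewrite as the target. The separator token and the example values are illustrative assumptions, not the dataset's actual format.

```python
# Hypothetical serialization of a question-in-context rewriting example.
# The separator and field layout are assumptions, not CANARD's schema.
SEP = " ||| "

def make_seq2seq_pair(history, question, rewrite):
    """Build (source, target) strings for a sequence-to-sequence rewriter."""
    source = SEP.join(history + [question])   # context-dependent input
    target = rewrite                          # self-contained question
    return source, target

history = ["Frank Zappa", "What was his first album?",
           "Freak Out! was his first album."]
question = "When was it released?"
rewrite = "When was Freak Out! released?"
print(make_seq2seq_pair(history, question, rewrite))
```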
CoQA: A Conversational Question Answering Challenge
TLDR
CoQA, a novel dataset for building Conversational Question Answering systems, is introduced, and it is shown that conversational questions exhibit challenging phenomena not present in existing reading comprehension datasets (e.g., coreference and pragmatic reasoning).
Query Reformulation using Query History for Passage Retrieval in Conversational Search
TLDR
This work proposes two simple yet effective query reformulation approaches, historical query expansion (HQE) and neural transfer reformulation (NTR), and shows that fusing their output reduces the performance gap between manually rewritten and automatically generated queries from 22 to 4 points when compared with the best CAsT submission.
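A toy reading of historical query expansion, assuming it simply appends salient terms from earlier turns to the current query; the actual HQE method selects keywords far more carefully than this stopword filter does.

```python
# Toy historical query expansion: append non-stopword terms from earlier
# turns to the current turn's query. This only illustrates the general
# shape of the approach, not the paper's keyword-selection criterion.
from typing import List

STOPWORDS = {"the", "a", "an", "of", "is", "was", "what", "when", "who",
             "where", "how", "it", "its", "his", "her", "their", "in", "to"}

def expand_query(current_query: str, previous_turns: List[str]) -> str:
    history_terms = []
    for turn in previous_turns:
        for token in turn.lower().split():
            token = token.strip("?,.!")
            if token and token not in STOPWORDS and token not in history_terms:
                history_terms.append(token)
    return current_query + " " + " ".join(history_terms)

print(expand_query("When was it released?",
                   ["What was Frank Zappa's first album?"]))
```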
Few-Shot Generative Conversational Query Rewriting
TLDR
This paper develops two methods, based on rules and self-supervised learning, to generate weak supervision data from large amounts of ad hoc search sessions, and fine-tunes GPT-2 to rewrite conversational queries.
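The paper's own pipeline is not reproduced here; the following is a minimal sketch, assuming the Hugging Face transformers and torch packages, of fine-tuning GPT-2 on (conversational context, rewrite) pairs serialized into a single string. The `<sep>`/`<rewrite>` markers, the single-example "dataset", and the hyperparameters are illustrative assumptions.

```python
# Minimal sketch of fine-tuning GPT-2 for conversational query rewriting.
# Assumes `pip install torch transformers`; the prompt format is an assumption.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# Weak-supervision pairs: conversational context + query -> standalone rewrite.
pairs = [
    ("What was Frank Zappa's first album? <sep> When was it released?",
     "When was Freak Out! released?"),
]

model.train()
for context, rewrite in pairs:
    text = f"{context} <rewrite> {rewrite}{tokenizer.eos_token}"
    inputs = tokenizer(text, return_tensors="pt")
    # Standard causal LM objective: labels are the input ids themselves.
    loss = model(**inputs, labels=inputs["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# After training, generate a rewrite by prompting up to the <rewrite> marker.
model.eval()
prompt = tokenizer("What was Frank Zappa's first album? <sep> "
                   "When was it released? <rewrite>", return_tensors="pt")
out = model.generate(**prompt, max_new_tokens=20,
                     pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

In practice the rule-based and self-supervised weak-supervision pairs would number in the millions; the single hand-written pair above only demonstrates the interface.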
Adversarial Examples for Evaluating Reading Comprehension Systems
TLDR
This work proposes an adversarial evaluation scheme for the Stanford Question Answering Dataset that tests whether systems can answer questions about paragraphs containing adversarially inserted sentences, which are generated without changing the correct answer or misleading humans.
MS MARCO: A Human Generated MAchine Reading COmprehension Dataset
TLDR
This new dataset aims to overcome a number of well-known weaknesses of previous publicly available datasets for the same task of reading comprehension and question answering, and is the most comprehensive real-world dataset of its kind in both quantity and quality.
Conversational Query Understanding Using Sequence to Sequence Modeling
TLDR
A large-scale open-domain dataset of conversational queries is presented, along with various sequence-to-sequence models learned from this dataset, showing the potential of sequence-to-sequence modeling for this task.
Language Models are Unsupervised Multitask Learners
TLDR
It is demonstrated that language models begin to learn language processing tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText, suggesting a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations.
Learning to Rewrite Queries
TLDR
A learning-to-rewrite framework is proposed that consists of a candidate generation phase and a candidate ranking phase, allowing the flexibility to reuse most existing query rewriters and to explicitly optimize search relevance.
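The two-phase structure described in the summary can be sketched as below: any number of existing rewriters propose candidates, and a scorer, which in that framework is learned to optimize search relevance, ranks them. The function interfaces are assumptions for illustration.

```python
# Sketch of a generate-then-rank query rewriting pipeline.
# `rewriters` reuse existing rewriting functions; `relevance` stands in for
# a scorer that would be learned to optimize search relevance.
from typing import Callable, Iterable, List

def rewrite(query: str,
            rewriters: Iterable[Callable[[str], List[str]]],
            relevance: Callable[[str, str], float]) -> str:
    # Phase 1: candidate generation from any number of existing rewriters.
    candidates = {query}  # keep the original query as a fallback candidate
    for rewriter in rewriters:
        candidates.update(rewriter(query))
    # Phase 2: candidate ranking by (learned) relevance to the original query.
    return max(candidates, key=lambda cand: relevance(query, cand))
```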