CoQA: A Conversational Question Answering Challenge

@article{Reddy2019CoQAAC,
  title={CoQA: A Conversational Question Answering Challenge},
  author={Siva Reddy and Danqi Chen and Christopher D. Manning},
  journal={Transactions of the Association for Computational Linguistics},
  year={2019},
  volume={7},
  pages={249--266}
}
Humans gather information through conversations involving a series of interconnected questions and answers. […] The best system obtains an F1 score of 65.4%, which is 23.4 points behind human performance (88.8%), indicating that there is ample room for improvement. We present CoQA as a challenge to the community at https://stanfordnlp.github.io/coqa.
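The F1 scores above follow the SQuAD-style evaluation that CoQA adopts: precision and recall over the overlapping tokens between a predicted answer and a gold answer. A minimal sketch of this token-overlap F1 (simplified — the official script also lowercases, strips punctuation and articles, and macro-averages over multiple gold answers):

```python
from collections import Counter

def token_f1(prediction: str, reference: str) -> float:
    """Token-overlap F1 between a predicted and a gold answer string."""
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    # Multiset intersection counts each shared token at most
    # as often as it appears in both strings.
    common = Counter(pred_tokens) & Counter(ref_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, `token_f1("the cat sat", "a cat sat")` shares two of three tokens on each side, giving precision and recall of 2/3 and an F1 of about 0.67.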

CoQAR: Question Rewriting on CoQA

CoQAR, a corpus containing 4.5K conversations from the Conversational Question-Answering dataset CoQA (53K follow-up question-answer pairs in total), is proposed; the results support the idea that question rewriting can be used as a preprocessing step for question answering models, improving their performance.

TopiOCQA: Open-domain Conversational Question Answering with Topic Switching

TopiOCQA is introduced, an open-domain conversational dataset with topic switches based on Wikipedia that poses a challenging test-bed for models, where efficient retrieval is required on multiple turns of the same conversation, in conjunction with constructing valid responses using conversational history.

QAConv: Question Answering on Informative Conversations

A new question answering (QA) dataset that uses conversations as a knowledge source, focusing on informative conversations, including business emails, panel discussions, and work channels, that provides a new training and evaluation testbed to facilitate QA on conversations research.

Abg-CoQA: Clarifying Ambiguity in Conversational Question Answering

Abg-CoQA, a novel dataset for clarifying ambiguity in Conversational Question Answering systems, is introduced, and strong language generation models and conversational question answering models are evaluated on Abg-CoQA.

BERT-CoQAC: BERT-Based Conversational Question Answering in Context

This paper introduces a framework based on the publicly available pre-trained language model BERT for incorporating history turns into the system, and proposes a history selection mechanism that selects the turns that are relevant and contribute the most to answering the current question.

Open-Domain Question Answering Goes Conversational via Question Rewriting

A strong baseline approach is introduced that combines the state-of-the-art model for question rewriting with competitive models for open-domain QA, and the effectiveness of this approach is reported.

FriendsQA: Open-Domain Question Answering on TV Show Transcripts

FriendsQA, a challenging question answering dataset containing 1,222 dialogues and 10,610 open-domain questions for machine comprehension on everyday conversations, has great potential to elevate QA research on multiparty dialogue to another level.

Conversational Question Answering Using a Shift of Context

A conversational speech interface for QA is presented, in which users can pose questions in both text and speech to query DBpedia entities and converse in the form of a natural dialog by asking follow-up questions.

An Empirical Study of Content Understanding in Conversational Question Answering

The experimental results indicate some potential hazards in the benchmark datasets QuAC and CoQA for conversational comprehension research, and shed light on both what models may learn and how datasets may bias the models.

Conversational QA for FAQs

The dataset and experiments show that domain-specific FAQs can be accessed with high quality using conversational QA systems with little training data, thanks to transfer learning; results of state-of-the-art models are reported, including transfer learning from Wikipedia QA datasets to the authors' cooking FAQ dataset.
...

References

SHOWING 1-10 OF 62 REFERENCES

QuAC: Question Answering in Context

QuAC introduces challenges not found in existing machine comprehension datasets: its questions are often more open-ended, unanswerable, or only meaningful within the dialog context, as shown in a detailed qualitative evaluation.

Beyond Question-Answering.

Abstract: We demonstrate, using protocols of actual interactions with a question-answering system, that users of these systems expect to engage in a conversation whose coherence is manifested in the

FlowQA: Grasping Flow in History for Conversational Machine Comprehension

By reducing sequential instruction understanding to conversational machine comprehension, FlowQA outperforms the best models on all three domains in SCONE, with +1.8% to +4.4% improvement in accuracy.

SDNet: Contextualized Attention-based Deep Network for Conversational Question Answering

An innovative contextualized attention-based deep neural network, SDNet, is proposed to fuse context into traditional MRC models; it leverages both inter-attention and self-attention to comprehend conversation context and extract relevant information from the passage.

Looking Beyond the Surface: A Challenge Set for Reading Comprehension over Multiple Sentences

The dataset is the first to study multi-sentence inference at scale, with an open-ended set of question types that requires reasoning skills, and finds human solvers to achieve an F1-score of 88.1%.

Complex Sequential Question Answering: Towards Learning to Converse Over Linked Question Answer Pairs with a Knowledge Graph

The task of Complex Sequential QA is introduced which combines the two tasks of answering factual questions through complex inferencing over a realistic-sized KG of millions of entities, and learning to converse through a series of coherently linked QA pairs.

Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks

This work argues for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering, and classify these tasks into skill sets so that researchers can identify (and then rectify) the failings of their systems.

The NarrativeQA Reading Comprehension Challenge

A new dataset and set of tasks in which the reader must answer questions about stories by reading entire books or movie scripts are presented, designed so that successfully answering their questions requires understanding the underlying narrative rather than relying on shallow pattern matching or salience.

Search-based Neural Structured Learning for Sequential Question Answering

This work proposes a novel dynamic neural semantic parsing framework trained using a weakly supervised reward-guided search that effectively leverages the sequential context to outperform state-of-the-art QA systems that are designed to answer highly complex questions.

TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension

It is shown that, in comparison to other recently introduced large-scale datasets, TriviaQA has relatively complex, compositional questions, has considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and requires more cross-sentence reasoning to find answers.
...