QASC: A Dataset for Question Answering via Sentence Composition

@inproceedings{Khot2020QASCAD,
  title={QASC: A Dataset for Question Answering via Sentence Composition},
  author={Tushar Khot and Peter Clark and Michal Guerquin and Peter A. Jansen and Ashish Sabharwal},
  booktitle={AAAI},
  year={2020}
}
Composing knowledge from multiple pieces of text is a key challenge in multi-hop question answering. We present a multi-hop reasoning dataset, Question Answering via Sentence Composition (QASC), that requires retrieving facts from a large corpus and composing them to answer a multiple-choice question. QASC is the first dataset to offer two desirable properties: (a) the facts to be composed are annotated in a large corpus, and (b) the decomposition into these facts is not evident from the …
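To make the two-hop setup concrete, here is a rough, hypothetical sketch of fact composition over a small corpus: score an answer choice by finding one fact that overlaps the question and a second fact that bridges the first fact to the choice. This toy lexical-overlap heuristic is purely illustrative; it is not the QASC authors' retrieval system, and the corpus and question below are invented examples.

```python
# Toy two-hop retrieval by lexical overlap (illustrative only; QASC's
# actual baselines use trained retrievers, not this heuristic).

def tokens(text):
    return set(text.lower().split())

def overlap(a, b):
    return len(tokens(a) & tokens(b))

def two_hop_score(question, choice, corpus):
    """Score a choice by the best fact pair (f1, f2): f1 overlaps the
    question, f2 overlaps both f1 and the answer choice."""
    best = 0
    for f1 in corpus:
        if overlap(question, f1) == 0:
            continue
        for f2 in corpus:
            if f2 is f1:
                continue
            if overlap(f1, f2) and overlap(f2, choice):
                score = overlap(question, f1) + overlap(f1, f2) + overlap(f2, choice)
                best = max(best, score)
    return best

corpus = [
    "differential heating of air produces wind",
    "wind is used for producing electricity by turbines",
    "metals conduct electricity",
]
question = "what can differential heating of air produce that turbines use?"
print(two_hop_score(question, "electricity", corpus))  # → 6
```

The key property this mimics is that neither fact alone answers the question: the first fact connects to the question, the second connects to the answer only through the first.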
BiQuAD: Towards QA based on deeper text understanding
This work introduces a new dataset, BiQuAD, that requires deeper comprehension in order to answer questions in both an extractive and a deductive fashion, and shows that state-of-the-art QA models do not perform well on the challenging long-form contexts and reasoning requirements posed by the dataset.
Unsupervised Alignment-based Iterative Evidence Retrieval for Multi-hop Question Answering
This work introduces a simple, fast, and unsupervised iterative evidence retrieval method that outperforms all previous methods on the evidence selection task on two datasets: MultiRC and QASC.
Learning to Explain: Datasets and Models for Identifying Valid Reasoning Chains in Multihop Question-Answering
This work explores a delexicalized chain representation in which repeated noun phrases are replaced by variables, turning chains into generalized reasoning chains, and finds that generalized chains maintain performance while being more robust to certain perturbations.
Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies
This work introduces StrategyQA, a question answering benchmark where the required reasoning steps are implicit in the question and must be inferred using a strategy. It proposes a data collection procedure that combines term-based priming to inspire annotators, careful control over the annotator population, and adversarial filtering to eliminate reasoning shortcuts.
WorldTree V2: A Corpus of Science-Domain Structured Explanations and Inference Patterns supporting Multi-Hop Inference
This work presents the second iteration of the WorldTree project: a corpus of 5,114 standardized science exam questions paired with large, detailed multi-fact explanations that combine core scientific knowledge and world knowledge. The explanation corpus is then used to author a set of 344 high-level science-domain inference patterns, similar to semantic frames, that support multi-hop inference.
Generating Followup Questions for Interpretable Multi-hop Question Answering
We propose a framework for answering open-domain multi-hop questions in which partial information is read and used to generate followup questions, to finally be answered by a pretrained single-hop …
Memory Augmented Sequential Paragraph Retrieval for Multi-hop Question Answering
This paper proposes a new architecture that models paragraphs as sequential data and treats multi-hop information retrieval as a kind of sequence labeling task, designing a rewritable external memory to model the dependency among paragraphs.
Explaining Question Answering Models through Text Generation
A model for multiple-choice question answering in which an LM-based generator produces a textual hypothesis that is later used by a classifier to answer the question; the generated hypotheses elucidate the knowledge the LM uses to answer.
Exploiting Reasoning Chains for Multi-hop Science Question Answering
  • Weiwen Xu, Yang Deng, Huihui Zhang, Deng Cai, W. Lam. ArXiv, 2021
We propose a novel Chain Guided Retriever-reader (CGR) framework to model the reasoning chain for multi-hop Science Question Answering. Our framework is capable of performing explainable reasoning …
Dynamic Semantic Graph Construction and Reasoning for Explainable Multi-hop Science Question Answering
Results on two scientific multi-hop QA datasets show that this framework can surpass recent approaches, including those using additional knowledge graphs, while maintaining high explainability on OpenBookQA, and achieves a new state-of-the-art result on ARC-Challenge in a computationally practicable setting.

References

Showing 1–10 of 39 references
HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering
It is shown that HotpotQA is challenging for the latest QA systems, and that its supporting facts enable models to improve performance and make explainable predictions.
Question Answering as Global Reasoning Over Semantic Abstractions
This work presents the first system that reasons over a wide range of semantic abstractions of the text, derived using off-the-shelf, general-purpose, pre-trained natural language modules such as semantic role labelers, coreference resolvers, and dependency parsers.
Improving Question Answering with External Knowledge
This work explores simple yet effective methods for exploiting two sources of external knowledge for multiple-choice question answering in subject areas such as science.
The Web as a Knowledge-Base for Answering Complex Questions
This paper proposes to decompose complex questions into a sequence of simple questions and to compute the final answer from the sequence of answers, empirically demonstrating that question decomposition improves performance from 20.8 precision@1 to 27.5 precision@1 on this new dataset.
Multi-hop Inference for Sentence-level TextGraphs: How Challenging is Meaningfully Combining Information for Science Question Answering?
This work empirically characterizes the difficulty of building or traversing a graph of sentences connected by lexical overlap, evaluating chance sentence-aggregation quality through 9,784 manually annotated judgements across knowledge graphs built from three free-text corpora.
What's Missing: A Knowledge Gap Guided Approach for Multi-hop Question Answering
A novel approach is developed that explicitly identifies the knowledge gap between a key span in the provided knowledge and the answer choices, and learns to fill this gap by determining the relationship between the span and an answer choice, based on retrieved knowledge targeting this gap.
Question Answering via Integer Programming over Semi-Structured Knowledge
This work proposes a structured inference system for this task, formulated as an Integer Linear Program (ILP), that answers natural language questions using a semi-structured knowledge base derived from text, including questions requiring multi-step inference and a combination of multiple facts.
Answering Complex Questions Using Open Information Extraction
This work develops a new inference model for Open IE that works effectively with multiple short facts, noise, and the relational structure of tuples, and significantly outperforms a state-of-the-art structured solver on complex questions of varying difficulty.
Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge
A new question set, text corpus, and baselines, assembled to encourage AI research in advanced question answering, constitute the AI2 Reasoning Challenge (ARC), which requires far more powerful knowledge and reasoning than previous challenges such as SQuAD or SNLI.
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
This work argues for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering, and classifies these tasks into skill sets so that researchers can identify (and then rectify) the failings of their systems.
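Several of the references above (e.g., the TextGraphs study of sentence aggregation) examine graphs of sentences connected by lexical overlap. As a rough, hypothetical sketch of that idea, and not any paper's actual implementation, such a graph can be built by linking sentences that share content words; the stopword list and example sentences below are invented for illustration.

```python
# Minimal sketch of a sentence graph with edges for lexical overlap
# (an illustration of the idea, not the code from any cited paper).
from itertools import combinations

STOP = {"the", "a", "of", "is", "are", "and", "in", "to", "at"}

def content_words(sentence):
    """Lowercased word set with a tiny illustrative stopword list removed."""
    return {w for w in sentence.lower().split() if w not in STOP}

def build_overlap_graph(sentences, min_shared=1):
    """Return adjacency sets: i -> {j} where sentences i and j share
    at least min_shared content words."""
    words = [content_words(s) for s in sentences]
    graph = {i: set() for i in range(len(sentences))}
    for i, j in combinations(range(len(sentences)), 2):
        if len(words[i] & words[j]) >= min_shared:
            graph[i].add(j)
            graph[j].add(i)
    return graph

sents = [
    "plants perform photosynthesis",
    "photosynthesis requires sunlight",
    "owls hunt at night",
]
g = build_overlap_graph(sents)
print(g)  # nodes 0 and 1 share "photosynthesis"; node 2 is isolated
```

Traversing such a graph corresponds to hopping between facts through shared terms, which is exactly where the cited work finds that chance overlaps make meaningful aggregation difficult.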