Corpus ID: 233297028

Question Decomposition with Dependency Graphs

Matan Hasson and Jonathan Berant
QDMR is a meaning representation for complex questions, which decomposes questions into a sequence of atomic steps. While state-of-the-art QDMR parsers use the common sequence-to-sequence (seq2seq) approach, a QDMR structure fundamentally describes labeled relations between spans in the input question, and thus dependency-based approaches seem appropriate for this task. In this work, we present a QDMR parser that is based on dependency graphs (DGs), where nodes in the graph are words and edges…
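As an illustration (not taken from the paper), a QDMR decomposition writes a question as numbered atomic steps in which later steps refer back to earlier ones with `#k` markers, following the Break dataset convention. The question and steps below are hand-written examples; a minimal sketch of reading those back-references:

```python
import re

# A hypothetical question with a hand-written QDMR-style decomposition.
question = "How many touchdowns did the team with the most wins score?"
steps = [
    "return teams",
    "return wins of #1",
    "return #1 where #2 is highest",
    "return touchdowns of #3",
    "return number of #4",
]

def referenced_steps(step):
    """Return the (1-based) indices of earlier steps this step refers to."""
    return [int(m) for m in re.findall(r"#(\d+)", step)]

# Each step may only refer to steps that precede it, so the
# decomposition forms a directed acyclic graph over the steps.
for i, step in enumerate(steps, start=1):
    assert all(ref < i for ref in referenced_steps(step))
```

This step-to-step reference structure is what makes graph-based (rather than purely sequential) parsing a natural fit.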
1 Citation
SPARQLing Database Queries from Intermediate Question Decompositions
To translate natural language questions into executable database queries, most approaches rely on a fully annotated training set. Annotating a large dataset with queries is difficult as it requires…

References
Break It Down: A Question Understanding Benchmark
This work introduces a Question Decomposition Meaning Representation (QDMR) for questions, and demonstrates the utility of QDMR by showing that it can be used to improve open-domain question answering on the HotpotQA dataset, and can be deterministically converted to a pseudo-SQL formal language, which can alleviate annotation in semantic parsing applications.
Simpler but More Accurate Semantic Dependency Parsing
The LSTM-based syntactic parser of Dozat and Manning (2017) is extended to train on and generate graph structures that aim to capture between-word relationships that are more closely related to the meaning of a sentence, using graph-structured representations.
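For intuition, the biaffine scoring at the heart of this family of graph parsers assigns each ordered (dependent, head) word pair a score via a bilinear form over learned per-word representations. A minimal NumPy sketch, where the dimensions, random parameters, and variable names are illustrative rather than the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 8          # sentence length and hidden size (illustrative)

# Per-word "head" and "dependent" representations, e.g. from an encoder.
H_head = rng.normal(size=(n, d))
H_dep = rng.normal(size=(n, d))

# Biaffine parameters: a bilinear term plus a linear bias on the head side.
U = rng.normal(size=(d, d))
w = rng.normal(size=(d,))

# scores[i, j] = score of word j being a head of word i
scores = H_dep @ U @ H_head.T + H_head @ w   # shape (n, n)

# In a graph (non-tree) parser, each edge can be kept independently,
# e.g. whenever its score clears a threshold, rather than decoding a tree.
edges = scores > 0
```

Scoring all pairs at once is what lets the parser emit general dependency graphs instead of a single projective tree.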
Compositional Semantic Parsing on Semi-Structured Tables
This paper proposes a logical-form driven parsing algorithm guided by strong typing constraints and shows that it obtains significant improvements over natural baselines and is made publicly available.
The Web as a Knowledge-Base for Answering Complex Questions
This paper proposes to decompose complex questions into a sequence of simple questions, and compute the final answer from the sequence of answers, and empirically demonstrates that question decomposition improves performance from 20.8 precision@1 to 27.5 precision@1 on this new dataset.
Multi-hop Reading Comprehension through Question Decomposition and Rescoring
A system that decomposes a compositional question into simpler sub-questions that can be answered by off-the-shelf single-hop RC models is proposed, and a new global rescoring approach is introduced that considers each decomposition to select the best final answer, greatly improving overall performance.
GQA: A New Dataset for Real-World Visual Reasoning and Compositional Question Answering
We introduce GQA, a new dataset for real-world visual reasoning and compositional question answering, seeking to address key shortcomings of previous VQA datasets. We have developed a strong and…
HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering
It is shown that HotpotQA is challenging for the latest QA systems, and the supporting facts enable models to improve performance and make explainable predictions.
HybridQA: A Dataset of Multi-Hop Question Answering over Tabular and Textual Data
HybridQA is presented, a new large-scale question-answering dataset that requires reasoning on heterogeneous information and can serve as a challenging benchmark to study question answering with heterogeneous information.
Incorporating Copying Mechanism in Sequence-to-Sequence Learning
This paper incorporates copying into neural network-based Seq2Seq learning and proposes a new model called CopyNet with an encoder-decoder structure, which can nicely integrate the regular way of word generation in the decoder with the new copying mechanism, which can choose sub-sequences in the input sequence and put them at proper places in the output sequence.
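The copy idea can be sketched as a gated mixture of a generation distribution over the vocabulary and an attention-derived distribution over source tokens. A simplified pointer-style mix in the spirit of CopyNet, where the toy vocabulary, logits, and gating scalar `p_gen` are illustrative:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

vocab = ["<unk>", "return", "of", "teams", "wins"]
source_tokens = ["teams", "wins"]            # input words available to copy
source_ids = [vocab.index(t) for t in source_tokens]

gen_logits = np.array([0.1, 2.0, 0.5, -1.0, -1.0])   # decoder vocab logits
copy_attn = softmax(np.array([0.3, 1.2]))            # attention over source
p_gen = 0.7                                          # gate: generate vs copy

# Final distribution: gated mixture of generating from the vocabulary
# and copying a source token (copy mass is scattered onto vocab ids).
p = p_gen * softmax(gen_logits)
for tok_id, a in zip(source_ids, copy_attn):
    p[tok_id] += (1 - p_gen) * a

assert abs(p.sum() - 1.0) < 1e-9
```

Copying lets rare input words (here, the source tokens) receive probability mass even when the generator alone would assign them little.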
Constructing Datasets for Multi-hop Reading Comprehension Across Documents
A novel task to encourage the development of models for text understanding across multiple documents and to investigate the limits of existing methods, in which a model learns to seek and combine evidence, effectively performing multi-hop (alias multi-step) inference.