Corpus ID: 209202200

Neural Module Networks for Reasoning over Text

@article{Gupta2020NeuralMN,
  title={Neural Module Networks for Reasoning over Text},
  author={Nitish Gupta and Kevin Lin and Dan Roth and Sameer Singh and Matt Gardner},
  journal={ArXiv},
  year={2020},
  volume={abs/1912.04971}
}
Answering compositional questions that require multiple steps of reasoning against text is challenging, especially when they involve discrete, symbolic operations. Neural module networks (NMNs) learn to parse such questions as executable programs composed of learnable modules, performing well on synthetic visual QA domains. However, we find that it is challenging to learn these models for non-synthetic questions on open-domain text, where a model needs to deal with the diversity of natural…
Weakly Supervised Neuro-Symbolic Module Networks for Numerical Reasoning
WNSMN is proposed, a Weakly-Supervised Neuro-Symbolic Module Network trained with answers as the sole supervision for numerical-reasoning-based MRC; it outperforms NMN by 32% and the reasoning-free language model GenBERT by 8% in exact-match accuracy when trained under comparable weakly supervised settings.
Turning Tables: Generating Examples from Semi-structured Tables for Endowing Language Models with Reasoning Skills
This work proposes to leverage semi-structured tables to automatically generate question-paragraph pairs at scale, where answering the question requires reasoning over multiple facts in the paragraph, and adds a pre-training step over this synthetic data, which includes examples requiring 16 different reasoning skills.
Improving Numerical Reasoning Skills in the Modular Approach for Complex Question Answering on Text
This work proposes effective techniques to improve NMNs’ numerical reasoning capabilities by making the interpreter question-aware and capturing the relationship between entities and numbers.
Database Reasoning Over Text
This work proposes a modular architecture for answering database-style queries over multiple spans from text and aggregating them at scale; it scales to databases containing thousands of facts, whereas contemporary models are limited by how many facts they can encode.
Understanding Unnatural Questions Improves Reasoning over Text
This paper addresses the challenge of learning a high-quality programmer (parser) by projecting natural human-generated questions into unnatural machine-generated questions that are more convenient to parse, learning a semantic parser that associates synthetic questions with their corresponding action sequences.
Discrete Reasoning Templates for Natural Language Understanding
This paper presents an approach that reasons about complex questions by decomposing them into simpler subquestions that can take advantage of single-span extraction reading-comprehension models, and derives the final answer according to instructions in a predefined reasoning template.
Multi-Step Inference for Reasoning over Paragraphs
This work presents a compositional model reminiscent of neural module networks that can perform chained logical reasoning: it first finds relevant sentences in the context and then chains them together using neural modules.
KQA Pro: A Large Diagnostic Dataset for Complex Question Answering over Knowledge Base
This work introduces KQA Pro, a large-scale dataset for Complex KBQA; it generates questions, SPARQL queries, and functional programs with recursive templates and paraphrases the questions by crowdsourcing, giving rise to around 120K diverse instances.
Human Explanation-based Learning for Machine Comprehension
Human annotators usually provide only the final labels in the dataset collection process. The rich knowledge and deductive power behind their labeling decisions are not explicitly revealed in the dataset…
Learning from Task Descriptions
This work introduces a framework for developing NLP systems that solve new tasks after reading their descriptions, synthesizing prior work in this area, and instantiates it with a new English-language dataset, ZEST, structured for task-oriented evaluation on unseen tasks.

References

Showing 1–10 of 29 references
Learning to Reason: End-to-End Module Networks for Visual Question Answering
End-to-End Module Networks are proposed, which learn to reason by directly predicting instance-specific network layouts without the aid of a parser, achieving an error reduction of nearly 50% relative to state-of-the-art attentional approaches.
Neural Compositional Denotational Semantics for Question Answering
An end-to-end differentiable model for interpreting questions about a knowledge graph (KG), inspired by formal approaches to semantics; it generalizes well to longer questions than seen in its training data, in contrast to RNN baselines.
Learning a Natural Language Interface with Neural Programmer
This paper presents the first weakly supervised, end-to-end neural network model to induce such programs on a real-world dataset; it enhances the objective function of Neural Programmer, a neural network with built-in discrete operations, and applies it to WikiTableQuestions, a natural-language question-answering dataset.
Neural Module Networks
A procedure for constructing and learning neural module networks, which compose collections of jointly-trained neural "modules" into deep networks for question answering, and uses these structures to dynamically instantiate modular networks (with reusable components for recognizing dogs, classifying colors, etc.).
Neural Semantic Parsing with Type Constraints for Semi-Structured Tables
A new semantic parsing model for answering compositional questions on semi-structured Wikipedia tables achieves state-of-the-art accuracy, showing that type constraints and entity linking are valuable components to incorporate in neural semantic parsers.
DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs
A new reading comprehension benchmark, DROP, which requires Discrete Reasoning Over the content of Paragraphs, and a new model that combines reading comprehension methods with simple numerical reasoning to achieve 51% F1.
A Multi-Type Multi-Span Network for Reading Comprehension that Requires Discrete Reasoning
The Multi-Type Multi-Span Network (MTMSN) is introduced, a neural reading comprehension model that combines a multi-type answer predictor designed to support various answer types with a multi-span extraction method for dynamically producing one or multiple text spans.
CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning
This work presents a diagnostic dataset that tests a range of visual reasoning abilities and uses this dataset to analyze a variety of modern visual reasoning systems, providing novel insights into their abilities and limitations.
CommonsenseQA: A Question Answering Challenge Targeting Commonsense Knowledge
This work presents CommonsenseQA, a challenging new dataset for commonsense question answering, which extracts from ConceptNet multiple target concepts that have the same semantic relation to a single source concept.
HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering
It is shown that HotpotQA is challenging for the latest QA systems, and that the supporting facts enable models to improve performance and make explainable predictions.