Corpus ID: 237572003

ReaSCAN: Compositional Reasoning in Language Grounding

@article{Wu2021ReaSCANCR,
  title={ReaSCAN: Compositional Reasoning in Language Grounding},
  author={Zhengxuan Wu and Elisa Kreiss and Desmond C. Ong and Christopher Potts},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.08994}
}
The ability to compositionally map language to referents, relations, and actions is an essential component of language understanding. The recent gSCAN dataset (Ruis et al. 2020, NeurIPS) is an inspiring attempt to assess the capacity of models to learn this kind of grounding in scenarios involving navigational instructions. However, we show that gSCAN’s highly constrained design means that it does not require compositional interpretation and that many details of its instructions and scenarios… 
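To make the grounding problem concrete, here is a minimal toy sketch of resolving a relational referring expression such as "the red square that is next to the blue circle" in a grid world. This is purely illustrative, with hypothetical names (Entity, resolve, next_to) of my own choosing; it is not the ReaSCAN or gSCAN codebase.

from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:
    color: str
    shape: str
    row: int
    col: int

def matches(e, color=None, shape=None):
    # Attribute filter for a (possibly partial) description.
    return (color is None or e.color == color) and \
           (shape is None or e.shape == shape)

def next_to(a, b):
    # Adjacent cells (including diagonals), excluding identity.
    return a != b and max(abs(a.row - b.row), abs(a.col - b.col)) == 1

def resolve(world, target, relation=None, anchor=None):
    # Compositional resolution: filter by the target's attributes,
    # then, if a relation is given, keep only candidates standing in
    # that relation to some entity matching the anchor description.
    candidates = [e for e in world if matches(e, **target)]
    if relation is not None:
        anchors = [e for e in world if matches(e, **anchor)]
        candidates = [c for c in candidates
                      if any(relation(c, a) for a in anchors)]
    return candidates

world = [Entity("red", "square", 0, 0),
         Entity("red", "square", 3, 3),
         Entity("blue", "circle", 0, 1)]

# "the red square that is next to the blue circle"
print(resolve(world,
              target={"color": "red", "shape": "square"},
              relation=next_to,
              anchor={"color": "blue", "shape": "circle"}))
# -> [Entity(color='red', shape='square', row=0, col=0)]

A model that can latch onto surface cues never has to compose the target, relation, and anchor in this way; the point of ReaSCAN's design is to construct commands where such relational composition is actually required for task success.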

Citations

Dyna-bAbI: unlocking bAbI's potential with dynamic synthetic benchmarking
TLDR
The authors develop Dyna-bAbI, a dynamic framework providing fine-grained control over task generation in bAbI, underscoring the importance of highly controllable task generators for building robust NLU systems through a virtuous cycle of model and data development.
Relational reasoning and generalization using non-symbolic neural networks
TLDR
Findings indicate that neural models are able to solve equality-based reasoning tasks, suggesting that essential aspects of symbolic reasoning can emerge from data-driven, non-symbolic learning processes.
Pushing the Limits of Rule Reasoning in Transformers through Natural Language Satisfiability
TLDR
The authors find that, given sufficient training data, current transformers are surprisingly robust at solving the resulting NLSat problems of substantially increased difficulty, and that they exhibit some degree of scale invariance: the ability to generalize to problems of larger size and scope.
Inducing Causal Structure for Interpretable Neural Networks
TLDR
The paper presents interchange intervention training (IIT), a method that aligns variables in a causal model with representations in a neural model and trains the neural model to match the counterfactual behavior of the causal model on a base input when the aligned representations in both models are set to the values they would take for a second, source input.
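As a rough illustration of the interchange-intervention step at the heart of IIT, a single gradient update might look like the PyTorch sketch below. This is a toy rendering under my own assumptions, not the authors' implementation; in particular, counterfactual_target here is a random stand-in for what the causal model would output on the counterfactual.

import torch
import torch.nn as nn

torch.manual_seed(0)

class TwoLayerNet(nn.Module):
    # Toy network; the first 4 hidden units are treated as "aligned"
    # with a variable in the causal model.
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Linear(4, 8)
        self.layer2 = nn.Linear(8, 2)

    def forward(self, x, patch=None):
        h = torch.relu(self.layer1(x))
        if patch is not None:
            h = h.clone()
            h[:, :4] = patch          # overwrite the aligned slice
        return self.layer2(h)

    def aligned_slice(self, x):
        # Activation slice aligned with the causal variable.
        return torch.relu(self.layer1(x))[:, :4]

model = TwoLayerNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

base = torch.randn(16, 4)
source = torch.randn(16, 4)
# Hypothetical stand-in: in IIT this is the causal model's output on
# `base` when the aligned variable takes its value from `source`.
counterfactual_target = torch.randn(16, 2)

optimizer.zero_grad()
patch = model.aligned_slice(source)         # value from the source run
patched_output = model(base, patch=patch)   # interchange intervention
loss = loss_fn(patched_output, counterfactual_target)
loss.backward()
optimizer.step()
print(f"IIT loss: {loss.item():.4f}")

In the full method, this counterfactual loss is combined with the ordinary task loss, so the network both solves its task and comes to localize the causal variable in the chosen slice of its hidden state.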

References

Showing 1-10 of 56 references
Think before you act: A simple baseline for compositional generalization
TLDR
This work proposes an attention-inspired modification of the baseline model from Ruis et al. 2020, together with an auxiliary loss, that takes into account the sequential nature of first identifying the target and then generating the action sequence, and finds that two compositional tasks are trivially solved with this approach.
Compositional Attention Networks for Machine Reasoning
TLDR
The paper presents the MAC network, a novel, fully differentiable neural network architecture designed to facilitate explicit and expressive reasoning; it is computationally efficient and data efficient, in particular requiring 5x less data than existing models to achieve strong results.
A Benchmark for Systematic Generalization in Grounded Language Understanding
TLDR
A new benchmark, gSCAN, is introduced for evaluating compositional generalization in models of situated language understanding, taking inspiration from standard models of meaning composition in formal linguistics and defining a language grounded in the states of a grid world.
GQA: A New Dataset for Real-World Visual Reasoning and Compositional Question Answering
We introduce GQA, a new dataset for real-world visual reasoning and compositional question answering, seeking to address key shortcomings of previous VQA datasets…
CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning
TLDR
This work presents a diagnostic dataset that tests a range of visual reasoning abilities and uses this dataset to analyze a variety of modern visual reasoning systems, providing novel insights into their abilities and limitations.
Analyzing Compositionality-Sensitivity of NLI Models
TLDR
This work proposes a compositionality-sensitivity testing setup that analyzes models on natural examples from existing datasets that cannot be solved via lexical features alone, hence revealing the models' actual compositionality awareness.
Learning to Compose and Reason with Language Tree Structures for Visual Grounding
TLDR
The paper proposes a natural language grounding model that automatically composes a binary tree structure for parsing the language and then performs visual reasoning along the tree in a bottom-up fashion, achieving state-of-the-art performance with more explainable reasoning.
CLOSURE: Assessing Systematic Generalization of CLEVR Models
TLDR
Surprisingly, an explicitly compositional Neural Module Network model is also found to generalize badly on CLOSURE, even when it has access to the ground-truth programs at test time.
Vision-Language Navigation With Self-Supervised Auxiliary Reasoning Tasks
TLDR
The paper introduces AuxRN, a framework with four self-supervised auxiliary reasoning tasks that exploit additional training signals derived from semantic information, helping the agent acquire knowledge of semantic representations so that it can reason about its activities and build a thorough perception of its environment.
Systematic Generalization on gSCAN with Language Conditioned Embedding
TLDR
This model is the first to significantly outperform the provided baseline and reach state-of-the-art performance on grounded SCAN (gSCAN), a grounded natural language navigation dataset whose test splits are designed to require systematic generalization.