Corpus ID: 219573265

Closed Loop Neural-Symbolic Learning via Integrating Neural Perception, Grammar Parsing, and Symbolic Reasoning

@inproceedings{Li2020ClosedLN,
  title={Closed Loop Neural-Symbolic Learning via Integrating Neural Perception, Grammar Parsing, and Symbolic Reasoning},
  author={Qing Li and Siyuan Huang and Yining Hong and Yixin Chen and Ying Nian Wu and Song-Chun Zhu},
  booktitle={International Conference on Machine Learning},
  year={2020}
}
The goal of neural-symbolic computation is to integrate the connectionist and symbolist paradigms. Prior methods learn the neural-symbolic models using reinforcement learning (RL) approaches, which ignore the error propagation in the symbolic reasoning module and thus converge slowly with sparse rewards. In this paper, we address these issues and close the loop of neural-symbolic learning by (1) introducing the grammar model as a symbolic prior to bridge neural perception and…
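As a rough illustration of the closed loop the abstract describes, consider a minimal sketch on a toy handwritten-arithmetic task: perception outputs a symbol sequence, a symbolic module executes it, and when the result disagrees with the ground-truth answer, a back-search proposes a nearby corrected sequence that serves as a pseudo-label for perception instead of a sparse RL reward. The function names, the symbol set, and the use of eval as a stand-in for grammar parsing plus reasoning are illustrative assumptions, not the paper's implementation.

import itertools

SYMBOLS = ["1", "2", "3", "+", "-"]

def execute(tokens):
    # Stand-in for grammar parsing + symbolic reasoning: evaluate the
    # token sequence as arithmetic, or fail if it does not parse.
    try:
        return eval("".join(tokens))
    except SyntaxError:
        return None

def back_search(pred_tokens, target, max_edits=1):
    # Look for a nearby symbol sequence whose execution matches the
    # ground-truth answer; it then directly supervises the perception module.
    for positions in itertools.combinations(range(len(pred_tokens)), max_edits):
        for replacement in itertools.product(SYMBOLS, repeat=max_edits):
            candidate = list(pred_tokens)
            for pos, sym in zip(positions, replacement):
                candidate[pos] = sym
            if execute(candidate) == target:
                return candidate
    return None

# Perception misreads the image "1 + 3" as ["1", "+", "2"]; the answer label is 4.
pred = ["1", "+", "2"]
print(back_search(pred, target=4))  # a one-edit fix that executes to 4, e.g. ['2', '+', '2']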

Citations

Abductive Knowledge Induction From Raw Data

MetaAbd is the first system that can jointly learn neural networks from scratch and induce recursive first-order logic theories with predicate invention; experimental results demonstrate that MetaAbd outperforms the compared systems in both predictive accuracy and data efficiency.

VLGrammar: Grounded Grammar Induction of Vision and Language

This work presents VLGrammar, a method that uses compound probabilistic context-free grammars (compound PCFGs) to induce the language grammar and the image grammar simultaneously, and proposes a novel contrastive learning framework to guide the joint learning of both modules.

Learning by Fixing: Solving Math Word Problems with Weak Supervision

This paper proposes a novel learning-by-fixing (LBF) framework, which corrects the misperceptions of the neural network via symbolic reasoning and achieves top-1 answer accuracy comparable to fully supervised methods and much better top-3/5 accuracies.

Grounding Physical Concepts of Objects and Events Through Dynamic Visual Reasoning

The Dynamic Concept Learner is presented, a unified framework that grounds physical objects and events from dynamic scenes and language and achieves state-of-the-art performance on CLEVRER, a challenging causal video reasoning dataset, even without using ground-truth attributes and collision labels from simulations for training.

ACRE: Abstract Causal REasoning Beyond Covariation

This work introduces the Abstract Causal REasoning (ACRE) dataset for systematic evaluation of current vision systems in causal induction, and observes that pure neural models perform at around chance level, tending towards an associative strategy, whereas neuro-symbolic combinations struggle with backward-blocking reasoning.

How to Design Sample and Computationally Efficient VQA Models

This work extends existing models to leverage soft programs and scene graphs to train on question-answer pairs in an end-to-end manner, and finds that representing the text as probabilistic programs and the images as object-level scene graphs best satisfies the desiderata.

Abstract Spatial-Temporal Reasoning via Probabilistic Abduction and Execution

A neuro-symbolic Probabilistic Abduction and Execution (PrAE) learner is proposed; central to it is the process of probabilistic abduction and execution on a probabilistic scene representation, akin to the mental manipulation of objects, which improves cross-configuration generalization and is capable of rendering an answer.
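For intuition only, here is a sketch of probabilistic abduction and execution on a single toy attribute, under heavy assumptions: the attribute takes six values, only two candidate rules ("constant" and "progression") exist, and perception outputs a distribution over values per panel. None of these names or rules come from the PrAE paper; they only illustrate abducing a rule distribution and executing it to render an answer distribution.

import numpy as np

VALUES = np.arange(1, 7)  # possible values of one attribute (e.g. object count)

def abduce(row_dists):
    # Probability of each toy rule given per-panel value distributions for one row.
    p_constant = sum(row_dists[0][v] * row_dists[1][v] * row_dists[2][v]
                     for v in range(len(VALUES)))
    p_progression = sum(row_dists[0][v] * row_dists[1][v + 1] * row_dists[2][v + 2]
                        for v in range(len(VALUES) - 2))
    z = p_constant + p_progression
    return {"constant": p_constant / z, "progression": p_progression / z}

def execute(rule_probs, context_dists):
    # Render the answer distribution for the missing third panel of the last row.
    answer = np.zeros(len(VALUES))
    for v in range(len(VALUES)):
        answer[v] += rule_probs["constant"] * context_dists[0][v] * context_dists[1][v]
    for v in range(len(VALUES) - 2):
        answer[v + 2] += rule_probs["progression"] * context_dists[0][v] * context_dists[1][v + 1]
    return answer / answer.sum()

# Soft perception output: the first row is clearly 2, 3, 4, i.e. a progression.
row = [np.eye(len(VALUES))[1], np.eye(len(VALUES))[2], np.eye(len(VALUES))[3]]
rules = abduce(row)
last_row_context = [np.eye(len(VALUES))[2], np.eye(len(VALUES))[3]]
print(rules, execute(rules, last_row_context))  # progression dominates; answer peaks at value 5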

Softened Symbol Grounding for Neuro-Symbolic Systems

Neuro-symbolic learning generally consists of two separated worlds, i.e., neural network training and symbolic constraint solving. A novel, softened symbol grounding process is presented that enables the interactions of the two worlds in a mutually beneficial manner and successfully solves problems well beyond the frontier of existing proposals.

References

Showing 1-10 of 69 references

The Neuro-Symbolic Concept Learner: Interpreting Scenes, Words, and Sentences from Natural Supervision

We propose the Neuro-Symbolic Concept Learner (NS-CL), a model that learns visual concepts, words, and semantic parsing of sentences without explicit supervision on any of them; instead, our model learns by simply looking at images and reading paired questions and answers.

Neural Symbolic Machines: Learning Semantic Parsers on Freebase with Weak Supervision

A Neural Symbolic Machine is introduced, which contains a neural “programmer” that maps language utterances to programs and utilizes a key-variable memory to handle compositionality, and a symbolic “computer”, i.e., a Lisp interpreter that performs program execution, and helps find good programs by pruning the search space.
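Purely as an illustration of the programmer/computer split summarized above, the sketch below has a toy Lisp-style "computer" execute candidate programs over an invented mini knowledge base, store intermediate results in a key-variable memory, and prune candidates whose partial execution fails. The knowledge base, operators, and candidate programs are assumptions; in the actual system the programmer is a learned seq2seq model over Freebase.

KB = {"capital_of": {"France": "Paris", "Japan": "Tokyo"},
      "population": {"Paris": 2_100_000, "Tokyo": 14_000_000}}

def execute(step, memory):
    # Run one Lisp-like expression, writing its result into a key-variable memory.
    op, arg, out_var = step
    value = memory.get(arg, arg)      # resolve variables produced by earlier steps
    result = KB[op].get(value)
    if result is None:                # execution failure -> prune this candidate
        return None
    memory[out_var] = result          # the key-variable memory handles compositionality
    return memory

# Candidate programs a neural "programmer" might propose for
# "How many people live in the capital of France?"
candidates = [
    [("capital_of", "France", "v1"), ("population", "v1", "v2")],
    [("population", "France", "v1")],  # pruned: no population fact for "France" here
]
for prog in candidates:
    memory, ok = {}, True
    for step in prog:
        if execute(step, memory) is None:
            ok = False
            break
    if ok:
        print(memory["v2" if "v2" in memory else "v1"])  # 2100000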

Neural-Symbolic Learning and Reasoning: A Survey and Interpretation

This joint survey reviews the personal ideas and views of several researchers on neural-symbolic learning and reasoning and presents the challenges facing the area and avenues for further research.

Neural-Symbolic VQA: Disentangling Reasoning from Vision and Language Understanding

This work proposes a neural-symbolic visual question answering system that first recovers a structural scene representation from the image and a program trace from the question, then executes the program on the scene representation to obtain an answer.
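A minimal sketch of that pipeline follows, with a hand-written scene and program standing in for the outputs of the neural scene parser and the question-to-program model; the object attributes and the tiny function vocabulary are assumptions made for illustration.

scene = [  # structural scene representation recovered from the image
    {"shape": "cube",   "color": "red",  "size": "large"},
    {"shape": "sphere", "color": "blue", "size": "small"},
    {"shape": "cube",   "color": "blue", "size": "small"},
]

def filter_attr(objs, attr, value):
    return [o for o in objs if o[attr] == value]

def count(objs):
    return len(objs)

# Program trace predicted from "How many blue cubes are there?"
program = [("filter", "color", "blue"), ("filter", "shape", "cube"), ("count",)]

state = scene
for step in program:
    if step[0] == "filter":
        state = filter_attr(state, step[1], step[2])
    elif step[0] == "count":
        state = count(state)
print(state)  # 1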

From Language to Programs: Bridging Reinforcement Learning and Maximum Marginal Likelihood

The goal is to learn a semantic parser that maps natural language utterances into executable programs when only indirect supervision is available, and a new algorithm is presented that guards against spurious programs by combining the systematic search traditionally employed in MML with the randomized exploration of RL.
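The sketch below, built on invented toy assumptions (a three-operator program space and two training examples), shows why denotation-only supervision invites spurious programs and how mixing a systematic beam with randomized, RL-style exploration plus a consistency check across examples can filter them; it is not the paper's algorithm.

import random

OPS = {"add": lambda a, b: a + b, "sub": lambda a, b: a - b, "mul": lambda a, b: a * b}

def run(op_name, example):
    a, b = example["numbers"]
    return OPS[op_name](a, b)

# Weak supervision: utterances paired only with their answers (denotations).
examples = [
    {"utterance": "two plus two", "numbers": (2, 2), "answer": 4},
    {"utterance": "four plus one", "numbers": (4, 1), "answer": 5},
]

def candidates(beam_ops, n_random=2, seed=0):
    # Systematic beam over a fixed op list, plus a few random draws (exploration).
    rng = random.Random(seed)
    pool = list(beam_ops) + [rng.choice(list(OPS)) for _ in range(n_random)]
    return dict.fromkeys(pool)          # de-duplicate while keeping order

consistent = [op for op in candidates(["add", "sub"])
              if all(run(op, ex) == ex["answer"] for ex in examples)]
print(consistent)  # ['add']; 'mul' also explains 2*2 == 4 but the second example rules it out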

Neural-Symbolic Cognitive Reasoning

This book is the first to offer a self-contained presentation of neural network models for a number of computer science logics, including modal, temporal, and epistemic logics and focuses on the benefits of integrating effective robust learning with expressive reasoning capabilities.

Bridging Machine Learning and Logical Reasoning by Abductive Learning

Abductive learning, targeted at unifying the two AI paradigms in a mutually beneficial way, is presented: the machine learning model learns to perceive primitive logic facts from data, while logical reasoning exploits symbolic domain knowledge and corrects wrongly perceived facts, improving the machine learning model.

Inferring and Executing Programs for Visual Reasoning

A model for visual reasoning that consists of a program generator that constructs an explicit representation of the reasoning process to be performed, and an execution engine that executes the resulting program to produce an answer is proposed.

Probabilistic Neural-symbolic Models for Interpretable Visual Question Answering

A new class of probabilistic neural-symbolic models is proposed, with symbolic functional programs as a latent, stochastic variable; these models are more understandable while requiring fewer teaching examples for VQA.

MathDQN: Solving Arithmetic Word Problems via Deep Reinforcement Learning

This is the first attempt at applying deep reinforcement learning to solve arithmetic word problems; the method yields remarkable improvements on most of the datasets and boosts the average precision across all benchmark datasets by 15%.
...