• Corpus ID: 219573265

# Closed Loop Neural-Symbolic Learning via Integrating Neural Perception, Grammar Parsing, and Symbolic Reasoning

@inproceedings{Li2020ClosedLN,
  title={Closed Loop Neural-Symbolic Learning via Integrating Neural Perception, Grammar Parsing, and Symbolic Reasoning},
  author={Qing Li and Siyuan Huang and Yining Hong and Yixin Chen and Ying Nian Wu and Song-Chun Zhu},
  booktitle={International Conference on Machine Learning},
  year={2020}
}
• Published in International Conference on Machine Learning, 11 June 2020
• Computer Science
The goal of neural-symbolic computation is to integrate the connectionist and symbolist paradigms. Prior methods learn neural-symbolic models using reinforcement learning (RL) approaches, which ignore the error propagation in the symbolic reasoning module and thus converge slowly with sparse rewards. In this paper, we address these issues and close the loop of neural-symbolic learning by (1) introducing the **grammar** model as a *symbolic prior* to bridge neural perception and…
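The closed loop described in the abstract can be illustrated with a minimal, hypothetical sketch (the names `parses`, `reason`, and `back_search` are illustrative, not the paper's API): a perception module proposes symbols, a grammar acts as a symbolic prior that rejects ill-formed sequences, symbolic reasoning computes an answer, and when that answer disagrees with the label, a back-search proposes a corrected symbol sequence to use as pseudo-supervision for perception.

```python
# Toy sketch of one closed-loop step for a handwritten-formula task.
# A (mock) perception module emits symbol guesses; a grammar rejects
# ill-formed sequences; reasoning evaluates the formula; if the result
# disagrees with the ground-truth answer, a 1-edit back-search looks
# for a minimal symbol correction to feed back as pseudo-supervision.

GRAMMAR_SYMBOLS = set("0123456789+-")

def parses(seq):
    """Grammar check: digit (op digit)* — a symbolic prior on sequences."""
    if len(seq) % 2 == 0 or not all(s in GRAMMAR_SYMBOLS for s in seq):
        return False
    return all(s.isdigit() if i % 2 == 0 else s in "+-"
               for i, s in enumerate(seq))

def reason(seq):
    """Symbolic reasoning: evaluate the parsed arithmetic expression."""
    return eval("".join(seq))  # safe here: grammar admits digits/ops only

def back_search(seq, target):
    """Propose a 1-edit correction whose evaluation matches the target."""
    for i, _ in enumerate(seq):
        pool = "0123456789" if i % 2 == 0 else "+-"
        for cand in pool:
            fixed = seq[:i] + [cand] + seq[i + 1:]
            if parses(fixed) and reason(fixed) == target:
                return fixed
    return None

# Perception misreads "3+5" as "3-5"; the answer label is 8.
perceived, label = list("3-5"), 8
corrected = None
if parses(perceived) and reason(perceived) != label:
    corrected = back_search(perceived, label)  # -> ['3', '+', '5']
    # `corrected` would supervise the perception module's next update.
```

This replaces the sparse scalar reward of RL with a dense, per-symbol correction signal, which is the intuition behind the faster convergence the abstract claims.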
38 Citations

## Citations

• Computer Science
IJCAI
• 2021
MetaAbd is the first system that can jointly learn neural networks from scratch and induce recursive first-order logic theories with predicate invention; experimental results demonstrate that MetaAbd outperforms the compared systems in both predictive accuracy and data efficiency.
• Computer Science, Linguistics
2021 IEEE/CVF International Conference on Computer Vision (ICCV)
• 2021
This work presents VLGrammar, a method that uses compound probabilistic context-free grammars (compound PCFGs) to induce the language grammar and the image grammar simultaneously, and proposes a novel contrastive learning framework to guide the joint learning of both modules.
• Computer Science
AAAI
• 2021
This paper proposes a novel learning-by-fixing (LBF) framework, which corrects the misperceptions of the neural network via symbolic reasoning and achieves comparable top-1 and much better top-3/5 answer accuracies than fully-supervised methods.
• Computer Science
ICLR
• 2021
The Dynamic Concept Learner is presented, a unified framework that grounds physical objects and events from dynamic scenes and language and achieves state-of-the-art performance on CLEVRER, a challenging causal video reasoning dataset, even without using ground-truth attributes and collision labels from simulations for training.
• Computer Science
2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
• 2021
This work introduces the Abstract Causal REasoning (ACRE) dataset for systematic evaluation of current vision systems in causal induction, and observes that pure neural models tend toward an associative strategy, as reflected in their chance-level performance, whereas neuro-symbolic combinations struggle with backward-blocking reasoning.
• Computer Science
ArXiv
• 2021
This work extends existing models to leverage soft programs and scene graphs, training on question-answer pairs in an end-to-end manner, and finds that representing the text as probabilistic programs and the images as object-level scene graphs best satisfies the desiderata.
• Computer Science
2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
• 2021
A neuro-symbolic Probabilistic Abduction and Execution (PrAE) learner is proposed, centered on probabilistic abduction and execution over a probabilistic scene representation, akin to the mental manipulation of objects; it improves cross-configuration generalization and is capable of rendering an answer.
Neuro-symbolic learning generally consists of two separate worlds; a novel, softened symbol grounding process is presented, enabling the two worlds to interact in a mutually beneficial manner and successfully solving problems well beyond the frontier of existing proposals.

## References

Showing 1–10 of 69 references

• Computer Science
ICLR
• 2019
We propose the Neuro-Symbolic Concept Learner (NS-CL), a model that learns visual concepts, words, and semantic parsing of sentences without explicit supervision on any of them; instead, the model learns by simply looking at images and reading paired questions and answers.
• Computer Science
ACL
• 2017
A Neural Symbolic Machine is introduced, which contains a neural “programmer” that maps language utterances to programs and utilizes a key-variable memory to handle compositionality, and a symbolic “computer”, i.e., a Lisp interpreter that performs program execution, and helps find good programs by pruning the search space.
• Computer Science, Psychology
ArXiv
• 2017
This joint survey reviews the personal ideas and views of several researchers on neural-symbolic learning and reasoning and presents the challenges facing the area and avenues for further research.
• Computer Science
NeurIPS
• 2018
This work proposes a neural-symbolic visual question answering system that first recovers a structural scene representation from the image and a program trace from the question, then executes the program on the scene representation to obtain an answer.
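The execution stage of such a system can be sketched as follows (a minimal illustration with made-up scene data and op names, not the paper's implementation): the scene representation is a list of object-attribute records, and the program trace is a sequence of operations that each map a set of objects to a smaller set or to a final answer.

```python
# Sketch: execute a program trace on a structural scene representation.
# The scene is a list of object-attribute dicts (as a perception module
# might recover from an image); each program step narrows the object
# set or produces the final answer.

scene = [
    {"shape": "cube", "color": "red", "size": "large"},
    {"shape": "sphere", "color": "red", "size": "small"},
    {"shape": "cube", "color": "blue", "size": "small"},
]

def run_program(scene, program):
    """Interpret ("filter", attr, value), ("count",), ("query", attr) steps."""
    objs = scene
    for step in program:
        op = step[0]
        if op == "filter":
            attr, value = step[1], step[2]
            objs = [o for o in objs if o[attr] == value]
        elif op == "count":
            return len(objs)
        elif op == "query":
            (obj,) = objs  # expects a unique remaining object
            return obj[step[1]]
    return objs

# "How many red things are there?" -> filter(color=red); count
n_red = run_program(scene, [("filter", "color", "red"), ("count",)])  # -> 2
```

Because execution is fully symbolic and deterministic, any answer error must originate in either the recovered scene or the predicted program, which is what makes the credit-assignment problem in such systems tractable.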
• Computer Science
ACL
• 2017
The goal is to learn a semantic parser that maps natural language utterances into executable programs when only indirect supervision is available, and a new algorithm is presented that guards against spurious programs by combining the systematic search traditionally employed in maximum marginal likelihood (MML) training with the randomized exploration of RL.
• Computer Science
Cognitive Technologies
• 2009
This book is the first to offer a self-contained presentation of neural network models for a number of computer science logics, including modal, temporal, and epistemic logics and focuses on the benefits of integrating effective robust learning with expressive reasoning capabilities.
• Computer Science
NeurIPS
• 2019
Abductive learning, targeted at unifying the two AI paradigms in a mutually beneficial way, is presented: the machine learning model learns to perceive primitive logic facts from data, while logical reasoning exploits symbolic domain knowledge and corrects wrongly perceived facts, thereby improving the machine learning model.
• Computer Science
2017 IEEE International Conference on Computer Vision (ICCV)
• 2017
A model for visual reasoning that consists of a program generator that constructs an explicit representation of the reasoning process to be performed, and an execution engine that executes the resulting program to produce an answer is proposed.
• Computer Science
ICML
• 2019
A new class of probabilistic neural-symbolic models is proposed, with symbolic functional programs as a latent stochastic variable; the models are more understandable while requiring fewer teaching examples for VQA.
• Computer Science
AAAI
• 2018
This is the first attempt to apply deep reinforcement learning to solving arithmetic word problems; it yields remarkable improvements on most datasets and boosts the average precision across all benchmark datasets by 15%.