# Learning Symbolic Rules for Reasoning in Quasi-Natural Language

@article{Yang2021LearningSR, title={Learning Symbolic Rules for Reasoning in Quasi-Natural Language}, author={Kaiyu Yang and Jia Deng}, journal={ArXiv}, year={2021}, volume={abs/2111.12038} }

Symbolic reasoning, i.e., rule-based symbol manipulation, is a hallmark of human intelligence. However, outside formalized domains such as automated theorem proving, rule-based systems have had limited success competing with learning-based systems. We hypothesize that this is due to the manual construction of rules in past attempts. In this work, we ask how we can build a rule-based system that can reason with natural language input but without the manual construction of rules. We propose MetaQNL, a…
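The abstract describes reasoning by applying symbolic rules directly to natural language sentences. The following toy sketch illustrates that general idea, and only the idea: rules are sentence templates with a variable slot, and forward chaining applies them until no new sentences are derived. The `match`/`apply_rules` helpers and the `$x` template syntax are hypothetical illustrations, not MetaQNL's actual rule language or learning algorithm.

```python
import re

def match(template, sentence):
    """Match a template like '$x is a bird' against a sentence; return bindings or None."""
    pattern = re.escape(template).replace(r"\$x", "(?P<x>.+)")
    m = re.fullmatch(pattern, sentence)
    return m.groupdict() if m else None

def apply_rules(facts, rules):
    """Forward-chain: repeatedly apply (premise, conclusion) rules until a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for fact in list(facts):
                bindings = match(premise, fact)
                if bindings:
                    new_fact = conclusion.replace("$x", bindings["x"])
                    if new_fact not in facts:
                        facts.add(new_fact)
                        changed = True
    return facts

rules = [("$x is a bird", "$x can fly")]
facts = apply_rules({"Tweety is a bird"}, rules)
```

Note that the interesting problem the paper targets is *learning* such rules from data rather than hand-writing them, which this sketch does not attempt.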

## 5 Citations

### Language Models as Inductive Reasoners

- Computer Science, ArXiv
- 2022

This work proposes a new task, which is to induce natural language rules from natural language facts, and creates a dataset termed DEER containing 1.2k rule-fact pairs for the task, where rules and facts are written in natural language.

### Generating Natural Language Proofs with Verifier-Guided Search

- Computer Science, ArXiv
- 2022

NLProofS (Natural Language Proof Search) is a novel stepwise method that learns to generate relevant proof steps conditioned on the hypothesis. It improves the correctness of predicted proofs from 27.7% to 33.3% in the distractor setting of EntailmentBank, demonstrating the effectiveness of NLProofS in generating challenging human-authored proofs.
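The stepwise, verifier-guided search described above can be sketched schematically: a generator proposes candidate proof steps, a verifier scores them, and search keeps the highest-scoring step until the hypothesis is derived. The `toy_generate` and `toy_verify` stubs below stand in for NLProofS's trained language models, so this is a shape-of-the-algorithm illustration only.

```python
def search_proof(premises, hypothesis, generate_steps, verify, max_steps=5):
    """Greedy verifier-guided search over candidate proof steps."""
    state = list(premises)
    proof = []
    for _ in range(max_steps):
        candidates = generate_steps(state, hypothesis)
        if not candidates:
            break
        best = max(candidates, key=lambda step: verify(state, step))
        if verify(state, best) < 0.5:  # verifier rejects every candidate
            break
        proof.append(best)
        state.append(best)
        if best == hypothesis:
            return proof
    return None

# Toy stand-ins for the trained generator and verifier.
def toy_generate(state, hypothesis):
    if ("all men are mortal" in state and "Socrates is a man" in state
            and hypothesis not in state):
        return [hypothesis]
    return []

def toy_verify(state, step):
    return 1.0  # stub: always confident

proof = search_proof(["all men are mortal", "Socrates is a man"],
                     "Socrates is mortal", toy_generate, toy_verify)
```

The key design point the paper's abstract emphasizes is that the verifier's score, not the generator's fluency, gates which steps enter the proof.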

### Machine Learning for Reasoning

- Computer Science
- 2021

This research advances the long-term goal of building machines that reason precisely and systematically, in ways that are interpretable and robust to ambiguity in real-world environments, by combining the complementary strengths of machine learning and symbolic reasoning.

### Natural Language Deduction with Incomplete Information

- Computer Science, EMNLP
- 2022

This work proposes a new system that can handle the underspecified setting where not all premises are stated at the outset; that is, additional assumptions need to be materialized to prove a claim.

### Can Pretrained Language Models (Yet) Reason Deductively?

- Computer Science, ArXiv
- 2022

It is suggested that PLMs cannot yet perform reliable deductive reasoning, demonstrating the importance of controlled examination and probing of PLMs’ reasoning abilities. The results reach beyond (misleading) task performance, revealing that PLMs are still far from human-level reasoning capabilities, even for simple deductive tasks.

## References

Showing 1-10 of 74 references

### NLProlog: Reasoning with Weak Unification for Question Answering in Natural Language

- Computer Science, ACL
- 2019

A model combining neural networks with logic programming in a novel manner for solving multi-hop reasoning tasks over natural language, using a Prolog prover with a similarity function over pretrained sentence encoders and fine-tuning the representations for the similarity function via backpropagation.
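NLProlog's "weak unification" relaxes Prolog's exact symbol matching: two surface forms unify if they are sufficiently similar under a similarity function. The sketch below illustrates that relaxation with `difflib` string similarity standing in for NLProlog's learned sentence embeddings; the function names and the 0.6 threshold are illustrative assumptions.

```python
from difflib import SequenceMatcher

def similarity(a, b):
    """Toy similarity: character-level ratio in [0, 1] (stands in for embeddings)."""
    return SequenceMatcher(None, a, b).ratio()

def weak_unify(pred_a, pred_b, threshold=0.6):
    """Return a unification score if the predicates weakly unify, else None.

    Exact Prolog unification would require pred_a == pred_b; here near-paraphrases
    unify too, which lets symbolic proof search operate over natural language.
    """
    score = similarity(pred_a, pred_b)
    return score if score >= threshold else None

score = weak_unify("is located in", "is situated in")  # paraphrases unify
miss = weak_unify("is located in", "eats")             # unrelated predicates do not
```

In the actual system, these per-step scores are aggregated into a proof score and backpropagated to fine-tune the sentence representations.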

### Learning Compositional Rules via Neural Program Synthesis

- Computer Science, NeurIPS
- 2020

This work presents a neuro-symbolic model which learns entire rule systems from a small set of examples, and outperforms neural meta-learning techniques in three domains: an artificial instruction-learning domain used to evaluate human learning, the SCAN challenge datasets, and learning rule-based translations of number words into integers for a wide range of human languages.

### Transformers as Soft Reasoners over Language

- Computer Science, IJCAI
- 2020

This work trains transformers to reason (or emulate reasoning) over natural language sentences using synthetically generated data, thus bypassing a formal representation and suggesting a new role for transformers, namely as limited "soft theorem provers" operating over explicit theories in language.

### PRover: Proof Generation for Interpretable Reasoning over Rules

- Computer Science, EMNLP
- 2020

This work proposes PROVER, an interpretable transformer-based model that jointly answers binary questions over rule-bases and generates the corresponding proofs, and learns to predict nodes and edges corresponding to proof graphs in an efficient constrained training paradigm.

### Differentiable Reasoning on Large Knowledge Bases and Natural Language

- Computer Science, Knowledge Graphs for eXplainable Artificial Intelligence
- 2020

Greedy NTPs are proposed, an extension to NTPs addressing their complexity and scalability limitations, thus making them applicable to real-world datasets, along with a novel approach for jointly reasoning over KBs and textual mentions by embedding logic facts and natural language sentences in a shared embedding space.

### ProofWriter: Generating Implications, Proofs, and Abductive Statements over Natural Language

- Computer Science, Findings of ACL
- 2021

This work shows that a generative model, called ProofWriter, can reliably generate both implications of a theory and the natural language proofs that support them, and shows that generative techniques can perform a type of abduction with high precision.

### Flexible Generation of Natural Language Deductions

- Computer Science, EMNLP
- 2021

ParaPattern is described, a method for building models to generate deductive inferences from diverse natural language inputs without direct human supervision that achieves 85% validity on examples of the ‘substitution’ operation from EntailmentBank without the use of any in-domain training data.

### Measuring Systematic Generalization in Neural Proof Generation with Transformers

- Computer Science, NeurIPS
- 2020

It is observed that models that are not trained to generate proofs are better at generalizing to problems based on longer proofs, which suggests that Transformers have efficient internal reasoning strategies that are harder to interpret.

### A Generative Symbolic Model for More General Natural Language Understanding and Reasoning

- Computer Science, ArXiv
- 2021

A new fully symbolic Bayesian model of semantic parsing and reasoning is presented, which is fully interpretable, is designed specifically with generality in mind, and therefore provides a clearer path for future research to expand its capabilities.

### Natural Logic for Textual Inference

- Computer Science, ACL-PASCAL@ACL
- 2007

This paper presents the first use of a computational model of natural logic---a system of logical inference which operates over natural language---for textual inference, and provides the first reported results for any system on the FraCaS test suite.