Measuring Systematic Generalization in Neural Proof Generation with Transformers
@article{Gontier2020MeasuringSG,
  title   = {Measuring Systematic Generalization in Neural Proof Generation with Transformers},
  author  = {Nicolas Gontier and Koustuv Sinha and Siva Reddy and C. Pal},
  journal = {ArXiv},
  year    = {2020},
  volume  = {abs/2009.14786}
}
We are interested in understanding how well Transformer language models (TLMs) can perform reasoning tasks when trained on knowledge encoded in the form of natural language. We investigate their systematic generalization abilities on a logical reasoning task in natural language, which involves reasoning over relationships between entities grounded in first-order logical proofs. Specifically, we perform soft theorem-proving by leveraging TLMs to generate natural language proofs. We test the…