Corpus ID: 222067015

Measuring Systematic Generalization in Neural Proof Generation with Transformers

@article{Gontier2020MeasuringSG,
  title={Measuring Systematic Generalization in Neural Proof Generation with Transformers},
  author={Nicolas Gontier and Koustuv Sinha and Siva Reddy and C. Pal},
  journal={ArXiv},
  year={2020},
  volume={abs/2009.14786}
}
  • Nicolas Gontier, Koustuv Sinha, Siva Reddy, C. Pal
  • Published 2020
  • Computer Science, Mathematics
  • ArXiv
  • We are interested in understanding how well Transformer language models (TLMs) can perform reasoning tasks when trained on knowledge encoded in the form of natural language. We investigate their systematic generalization abilities on a logical reasoning task in natural language, which involves reasoning over relationships between entities grounded in first-order logical proofs. Specifically, we perform soft theorem-proving by leveraging TLMs to generate natural language proofs. We test the…
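
The abstract describes soft theorem-proving in which a Transformer language model, trained on facts stated in natural language, generates the proof steps in natural language before producing an answer. Below is a minimal illustrative sketch, not the authors' code, of how a small set of entity-relationship facts and a query could be linearized into a prompt for such a model; the entity names, relation wording, and "Proof:" marker are assumptions made for illustration only.

    # Minimal sketch, not the paper's implementation: linearize kinship facts and
    # a query into a natural-language prompt for a causal language model that is
    # trained to emit a proof chain before its final answer.
    # Entity names, relation phrasing, and the prompt format are illustrative
    # assumptions, not the paper's exact setup.

    FACTS = [
        ("Alice", "mother", "Bob"),
        ("Bob", "father", "Carol"),
    ]
    QUERY = ("Alice", "Carol")

    def linearize(facts, query):
        """Encode the facts and the question as plain text for the language model."""
        story = " ".join(f"{head} is the {rel} of {tail}." for head, rel, tail in facts)
        question = f"How is {query[0]} related to {query[1]}?"
        return f"{story} {question} Proof:"

    prompt = linearize(FACTS, QUERY)
    print(prompt)
    # A proof-generating model would be expected to continue the prompt with
    # intermediate reasoning steps before the answer, for example:
    #   "Alice is the mother of Bob. Bob is the father of Carol.
    #    Therefore Alice is the grandmother of Carol. Answer: grandmother."
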
    4 Citations

    • ProofWriter: Generating Implications, Proofs, and Abductive Statements over Natural Language
    • Transition based Graph Decoder for Neural Machine Translation
    • Critical Thinking for Language Models
    • Systematic Generalization for Predictive Control in Multivariate Time Series

    References

    Showing 1-10 of 39 references
    • Transformers as Soft Reasoners over Language (24 citations)
    • Towards Proof Synthesis Guided by Neural Machine Translation for Intuitionistic Propositional Logic (9 citations)
    • Differentiable Reasoning on Large Knowledge Bases and Natural Language (21 citations)
    • Analyzing machine-learned representations: A natural language case study (3 citations)
    • End-to-end Differentiable Proving (161 citations)
    • Linguistic generalization and compositionality in modern artificial neural networks (M. Baroni, Philosophical Transactions of the Royal Society B, 2019; 38 citations)
    • Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context (867 citations)
    • CLUTRR: A Diagnostic Benchmark for Inductive Reasoning from Text (24 citations)
    • BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding (14,652 citations)