Corpus ID: 219573621

Teaching Pre-Trained Models to Systematically Reason Over Implicit Knowledge

@article{Talmor2020TeachingPM,
  title={Teaching Pre-Trained Models to Systematically Reason Over Implicit Knowledge},
  author={Alon Talmor and Oyvind Tafjord and Peter Clark and Yoav Goldberg and Jonathan Berant},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.06609}
}
To what extent can a neural network systematically reason over symbolic facts? Evidence suggests that large pre-trained language models (LMs) acquire some reasoning capacity, but this ability is difficult to control. Recently, it has been shown that Transformer-based models succeed in consistent reasoning over explicit symbolic facts, under a "closed-world" assumption. However, in an open-domain setup, it is desirable to tap into the vast reservoir of implicit knowledge already encoded in the…
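
The abstract describes a setup in which a Transformer is handed explicit natural-language facts and asked whether a hypothesis follows, possibly also drawing on knowledge stored in its pre-trained weights. As a rough illustration of how such a true/false probe could be wired up, here is a minimal sketch; it is not the authors' released code, and the roberta-base checkpoint, the two-label convention, and the facts/hypothesis sentence-pair format are all assumptions made for the example.

    # Minimal sketch of a closed-world, true/false hypothesis probe.
    # Assumptions (not from the paper): roberta-base as the backbone,
    # label 1 = "hypothesis follows", and an NLI-style sentence-pair
    # encoding of (facts, hypothesis).
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    MODEL_NAME = "roberta-base"  # placeholder; in practice a checkpoint fine-tuned on rule/fact data
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
    model.eval()

    facts = "A whale is a mammal. Mammals do not have gills."
    hypothesis = "A whale has gills."

    # Encode the explicit facts and the candidate hypothesis as a sentence pair.
    inputs = tokenizer(facts, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits

    # With a freshly initialized classification head these scores are arbitrary;
    # they only become meaningful after fine-tuning on entailment-style supervision.
    prob_true = torch.softmax(logits, dim=-1)[0, 1].item()
    print(f"P(hypothesis follows from facts) = {prob_true:.3f}")

Whether the model additionally uses implicit knowledge it acquired during pre-training (e.g., taxonomic facts about mammals) rather than only the stated facts is exactly the open-domain question the paper targets; the snippet only shows the input/output shape of such a probe.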
