
oLMpics - On what Language Model Pre-training Captures

@article{Talmor2019oLMpicsO,
  title={oLMpics - On what Language Model Pre-training Captures},
  author={Alon Talmor and Yanai Elazar and Yoav Goldberg and Jonathan Berant},
  journal={ArXiv},
  year={2019},
  volume={abs/1912.13283}
}
Abstract

Recent success of pre-trained language models (LMs) has spurred widespread interest in the language capabilities that they possess. However, efforts to understand whether LM representations are useful for symbolic reasoning tasks have been limited and scattered. In this work, we propose eight reasoning tasks, which conceptually require operations such as comparison, conjunction, and composition. A fundamental challenge is to understand whether the performance of an LM on a task should be attributed to the pre-trained representations or to the process of fine-tuning on the task data.
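The paper poses such tasks to masked LMs as multiple-choice cloze questions, with candidate answers scored at the [MASK] position. Below is a minimal sketch of that scoring setup using Hugging Face transformers with bert-base-uncased; the example sentence and candidate words are invented for illustration and are not items from the paper's datasets.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Load a BERT-style masked LM (model choice is illustrative).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Hypothetical comparison-style probe; the real tasks use controlled datasets.
question = "The size of an airplane is [MASK] than the size of a house."
candidates = ["larger", "smaller"]

inputs = tokenizer(question, return_tensors="pt")
# Locate the [MASK] position in the tokenized input.
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# Score each single-token candidate by its logit at the mask position;
# the higher-scoring candidate is the model's preferred answer.
for word in candidates:
    token_id = tokenizer.convert_tokens_to_ids(word)
    print(word, logits[0, mask_pos, token_id].item())
```

Because this zero-shot scoring requires no task-specific parameters, it is one way to separate what the pre-trained representations already encode from what fine-tuning adds.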


Paper Mentions

  • Explaining Question Answering Models through Text Generation
  • Teaching Pre-Trained Models to Systematically Reason Over Implicit Knowledge
  • Can RoBERTa Reason? A Systematic Approach to Probe Logical Reasoning in Language Models (2020)
  • CodeBERT: A Pre-Trained Model for Programming and Natural Languages
