oLMpics-On What Language Model Pre-training Captures

@article{Talmor2020oLMpicsOnWL,
  title={oLMpics-On What Language Model Pre-training Captures},
  author={Alon Talmor and Yanai Elazar and Yoav Goldberg and Jonathan Berant},
  journal={Transactions of the Association for Computational Linguistics},
  year={2020},
  volume={8},
  pages={743-758}
}
Abstract

Recent success of pre-trained language models (LMs) has spurred widespread interest in the language capabilities that they possess. However, efforts to understand whether LM representations are useful for symbolic reasoning tasks have been limited and scattered. In this work, we propose eight reasoning tasks, which conceptually require operations such as comparison, conjunction, and composition. A fundamental challenge is to understand whether the performance of a LM on a task should…
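The paper poses its reasoning tasks to masked LMs as multi-choice questions: the question is phrased with a mask token and the model's scores for the candidate answer words are compared, which allows zero-shot probing of the pre-trained model. Below is a minimal illustrative sketch of such a probe for an age-comparison style question; it assumes the HuggingFace transformers library and a roberta-base checkpoint, and is not the authors' released code.

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL_NAME = "roberta-base"  # any masked LM; the checkpoint name here is illustrative
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()

def best_choice(template, choices):
    # Replace the placeholder with the model's own mask token and encode.
    text = template.replace("[MASK]", tokenizer.mask_token)
    inputs = tokenizer(text, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        # Vocabulary scores at the masked position.
        logits = model(**inputs).logits[0, mask_pos[0]]
    scores = {}
    for choice in choices:
        ids = tokenizer(" " + choice, add_special_tokens=False)["input_ids"]
        assert len(ids) == 1, "this sketch only handles single-token answer choices"
        scores[choice] = logits[ids[0]].item()
    # Return the candidate the masked LM assigns the highest score.
    return max(scores, key=scores.get)

print(best_choice("A 21 year old person is [MASK] than a 35 year old person.",
                  ["younger", "older"]))

Because no fine-tuning is involved, any success or failure of such a probe reflects what the pre-trained representations already capture, which is the attribution question the abstract raises.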
