oLMpics - On what Language Model Pre-training Captures

• Computer Science
• Published in ArXiv 2019

@article{Talmor2019oLMpicsO,
  title={oLMpics - On what Language Model Pre-training Captures},
  author={Alon Talmor and Yanai Elazar and Yoav Goldberg and Jonathan Berant},
  journal={ArXiv},
  year={2019},
  volume={abs/1912.13283}
}
Recent success of pre-trained language models (LMs) has spurred widespread interest in the language capabilities that they possess. However, efforts to understand whether LM representations are useful for symbolic reasoning tasks have been limited and scattered. In this work, we propose eight reasoning tasks, which conceptually require operations such as comparison, conjunction, and composition. A fundamental challenge is to understand whether the performance of a LM on a task should be…
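The zero-shot probing setup the abstract alludes to can be illustrated with a minimal sketch (not the authors' released code): a masked LM scores single-token candidate answers at a [MASK] position, in the spirit of the paper's age-comparison task. The model name, the example question, and the two candidate answers below are illustrative assumptions.

# Minimal zero-shot masked-LM probing sketch (assumptions: model choice,
# example question, and candidate answers are illustrative, not the paper's data).
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "bert-base-uncased"  # any masked LM could be substituted here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

# Age-comparison style probe: candidate answers compete for a single [MASK].
question = "A 41 year old person is [MASK] than a 24 year old person."
candidates = ["older", "younger"]  # assumed to be single tokens in the vocabulary

inputs = tokenizer(question, return_tensors="pt")
mask_index = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]

with torch.no_grad():
    mask_logits = model(**inputs).logits[0, mask_index]  # scores over the vocabulary

# Compare the candidates' logits at the mask position and pick the higher one.
candidate_ids = [tokenizer.convert_tokens_to_ids(c) for c in candidates]
scores = mask_logits[0, candidate_ids]
print(candidates[int(scores.argmax())])

Because no parameters are updated, any success or failure on such a probe reflects what pre-training alone captures, which is the comparison the paper's evaluation protocol is designed to isolate.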
