Measuring and Improving Consistency in Pretrained Language Models
@article{Elazar2021MeasuringAI,
  title   = {Measuring and Improving Consistency in Pretrained Language Models},
  author  = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Abhilasha Ravichander and Eduard Hovy and Hinrich Sch{\"u}tze and Yoav Goldberg},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2102.01017}
}
Consistency of a model, that is, the invariance of its behavior under meaning-preserving alternations in its input, is a highly desirable property in natural language processing. In this paper we study the question: Are Pretrained Language Models (PLMs) consistent with respect to factual knowledge? To this end, we create PARAREL, a high-quality resource of cloze-style query English paraphrases. It contains a total of 328 paraphrases for 38 relations. Using PARAREL, we show that…
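The consistency test the abstract describes can be sketched in a few lines: query a masked LM with two paraphrases of the same cloze-style fact and check whether the top prediction agrees. The snippet below is a minimal sketch, not the authors' PARAREL code; the model choice (bert-base-cased), the example paraphrases, and the use of the Hugging Face transformers fill-mask pipeline are assumptions made for illustration.

```python
# Minimal sketch of a cloze-style consistency check (assumed setup, not the PARAREL code).
from transformers import pipeline

# Any masked LM could be probed; bert-base-cased is an arbitrary choice for illustration.
fill_mask = pipeline("fill-mask", model="bert-base-cased")

# Two meaning-preserving paraphrases of the same factual query (hypothetical examples).
paraphrases = [
    "Dante was born in [MASK].",
    "The birthplace of Dante is [MASK].",
]

# Take the top-ranked token predicted for the mask in each paraphrase.
top_predictions = [fill_mask(p)[0]["token_str"] for p in paraphrases]

# The model is consistent on this fact if every paraphrase yields the same prediction.
is_consistent = len(set(top_predictions)) == 1
print(top_predictions, "consistent:", is_consistent)
```

Aggregating this agreement check over many facts and relations would give a simple consistency score in the spirit of the evaluation the abstract outlines.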