Corpus ID: 231740560

Measuring and Improving Consistency in Pretrained Language Models

@article{Elazar2021MeasuringAI,
  title={Measuring and Improving Consistency in Pretrained Language Models},
  author={Yanai Elazar and Nora Kassner and Shauli Ravfogel and Abhilasha Ravichander and Eduard Hovy and Hinrich Sch{\"u}tze and Yoav Goldberg},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.01017}
}
Consistency of a model — that is, the invariance of its behavior under meaning-preserving alternations in its input — is a highly desirable property in natural language processing. In this paper we study the question: Are Pretrained Language Models (PLMs) consistent with respect to factual knowledge? To this end, we create PARAREL, a high-quality resource of cloze-style query English paraphrases. It contains a total of 328 paraphrases for thirty-eight relations. Using PARAREL, we show that…
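The abstract's notion of consistency — a model giving the same answer across meaning-preserving cloze-style paraphrases — can be sketched as a pairwise-agreement score. The templates and the `toy_predict` model below are illustrative stand-ins, not actual PARAREL data or the paper's exact metric:

```python
from itertools import combinations

# Illustrative cloze-style paraphrases for a "capital-of" relation
# (hypothetical examples, not taken from PARAREL).
TEMPLATES = [
    "The capital of {} is [MASK].",
    "{}'s capital city is [MASK].",
    "[MASK] is the capital of {}.",
]

def toy_predict(query):
    # Hypothetical stand-in for a PLM's top fill-mask prediction;
    # it answers inconsistently on the inverted phrasing.
    return "Lyon" if query.startswith("[MASK]") else "Paris"

def consistency(predict, templates, subject):
    """Fraction of paraphrase pairs that receive the same top prediction."""
    preds = [predict(t.format(subject)) for t in templates]
    pairs = list(combinations(preds, 2))
    return sum(a == b for a, b in pairs) / len(pairs)

score = consistency(toy_predict, TEMPLATES, "France")
print(score)  # 1/3 — only one of the three paraphrase pairs agrees
```

A fully consistent model would score 1.0 here regardless of whether its answer is factually correct; consistency and accuracy are separate axes, which is part of what the paper measures.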