Corpus ID: 231740560

Measuring and Improving Consistency in Pretrained Language Models

@article{Elazar2021MeasuringAI,
  title={Measuring and Improving Consistency in Pretrained Language Models},
  author={Yanai Elazar and Nora Kassner and Shauli Ravfogel and Abhilasha Ravichander and Eduard Hovy and Hinrich Schütze and Yoav Goldberg},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.01017}
}
Consistency of a model — that is, the invariance of its behavior under meaning-preserving alternations in its input — is a highly desirable property in natural language processing. In this paper we study the question: Are Pretrained Language Models (PLMs) consistent with respect to factual knowledge? To this end, we create PARAREL, a high-quality resource of cloze-style query English paraphrases. It contains a total of 328 paraphrases for thirty-eight relations. Using PARAREL, we show that…
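The measurement the abstract sketches can be illustrated with a minimal example: feed a masked language model two paraphrases of the same cloze-style factual query and check whether the top prediction is invariant. The sketch below uses the HuggingFace fill-mask pipeline; the model name, the paraphrase templates, the subject, and the top-1 exact-match criterion are illustrative assumptions, not templates or metrics taken from the PARAREL resource itself.

```python
from transformers import pipeline

# A minimal sketch of a cloze-style consistency check, assuming a
# HuggingFace masked LM. The templates and subject are hypothetical
# examples, not entries from the actual PARAREL resource.
unmasker = pipeline("fill-mask", model="bert-base-cased")

# Two meaning-preserving paraphrases of a "capital of" relation.
templates = [
    "The capital of {} is [MASK].",
    "{}'s capital city is [MASK].",
]

subject = "France"
predictions = []
for template in templates:
    query = template.format(subject)
    top = unmasker(query, top_k=1)[0]  # highest-scoring filler for [MASK]
    predictions.append(top["token_str"].strip())
    print(f"{query!r} -> {predictions[-1]!r}")

# Under this simplified criterion, the model is consistent on the pair
# iff both paraphrases elicit the same top-1 object prediction.
print("consistent:", len(set(predictions)) == 1)
```

A fuller reproduction would aggregate this pairwise agreement over all paraphrase pairs and subjects of a relation, which is closer in spirit to the per-relation consistency the paper reports.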
