Corpus ID: 212411919

How Context Affects Language Models' Factual Predictions

@article{Petroni2020HowCA,
  title={How Context Affects Language Models' Factual Predictions},
  author={Fabio Petroni and Patrick Lewis and Aleksandra Piktus and Tim Rockt{\"a}schel and Yuxiang Wu and Alexander H. Miller and Sebastian Riedel},
  journal={ArXiv},
  year={2020},
  volume={abs/2005.04611}
}
When pre-trained on large unsupervised textual corpora, language models are able to store and retrieve factual knowledge to some extent, making it possible to use them directly for zero-shot cloze-style question answering. However, storing factual knowledge in a fixed number of weights of a language model clearly has limitations. Previous approaches have successfully provided access to information outside the model weights using supervised architectures that combine an information retrieval…
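The abstract refers to zero-shot cloze-style probing: a factual query with a masked slot (e.g. "Dante was born in [MASK].") is scored by a language model, optionally prefixed with retrieved context. A minimal toy sketch of that setup (the scoring function and all names here are illustrative stand-ins, not the paper's actual BERT-based method):

```python
# Toy illustration of cloze-style factual probing with optional context.
# A real setup would use a masked LM (e.g. BERT) to rank vocabulary
# tokens for the [MASK] slot; a trivial substring count stands in here.

def score_candidate(query: str, candidate: str, context: str = "") -> int:
    """Toy score: how often the candidate appears in context + query."""
    text = (context + " " + query).lower()
    return text.count(candidate.lower())

def fill_mask(query, candidates, context=""):
    """Return candidates ranked best-first by the toy score."""
    return sorted(candidates, key=lambda c: -score_candidate(query, c, context))

query = "Dante was born in [MASK]."
candidates = ["Florence", "Rome"]

# With a retrieved passage that mentions the answer, the correct
# fill is ranked first; without context, the toy scorer cannot tell.
context = "Dante Alighieri was a poet from Florence."
print(fill_mask(query, candidates, context=context)[0])  # → Florence
```

This mirrors the paper's central question at a toy scale: whether prepending relevant context to the cloze query changes the model's factual prediction.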
