Corpus ID: 232168628

BERTese: Learning to Speak to BERT

@inproceedings{Haviv2021BERTeseLT,
  title={BERTese: Learning to Speak to BERT},
  author={Adi Haviv and Jonathan Berant and Amir Globerson},
  booktitle={EACL},
  year={2021}
}
Large pre-trained language models have been shown to encode large amounts of world and commonsense knowledge in their parameters, leading to substantial interest in methods for extracting that knowledge. In past work, knowledge was extracted by taking manually-authored queries and gathering paraphrases for them using a separate pipeline. In this work, we propose a method for automatically rewriting queries into “BERTese”, a paraphrase query that is directly optimized towards better knowledge…
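
The abstract describes a rewriter trained end-to-end to produce queries on which a frozen BERT recalls facts more reliably. Below is a minimal sketch of that core idea in PyTorch with Hugging Face transformers; this is not the authors' code: the bert-base-uncased checkpoint, the two-layer rewriter, the learning rate, and the single Dante example are illustrative assumptions, and the paper's auxiliary losses that keep rewrites close to actual language tokens are omitted.

import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
predictor = BertForMaskedLM.from_pretrained("bert-base-uncased")
for p in predictor.parameters():      # the knowledge source stays frozen
    p.requires_grad = False

# Trainable rewriter: maps the query's token embeddings to "BERTese" embeddings.
embed = predictor.get_input_embeddings()
rewriter = torch.nn.TransformerEncoder(
    torch.nn.TransformerEncoderLayer(d_model=768, nhead=8, batch_first=True),
    num_layers=2,
)
optimizer = torch.optim.Adam(rewriter.parameters(), lr=1e-5)

query, answer = "Dante was born in [MASK].", "florence"
enc = tokenizer(query, return_tensors="pt")
mask_pos = (enc.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
gold_id = tokenizer.convert_tokens_to_ids(answer)

# One training step: rewrite the query, run frozen BERT on the rewritten
# embeddings, and maximize the gold answer's probability at the [MASK] slot.
rewritten = rewriter(embed(enc.input_ids))
logits = predictor(inputs_embeds=rewritten,
                   attention_mask=enc.attention_mask).logits
loss = torch.nn.functional.cross_entropy(
    logits[0, mask_pos].unsqueeze(0), torch.tensor([gold_id]))
loss.backward()
optimizer.step()

Freezing the predictor is the crux of this setup: any gain in recall must come from the rewritten query itself, rather than from fine-tuning the model's stored knowledge.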
