Knowledgeable or Educated Guess? Revisiting Language Models as Knowledge Bases

@inproceedings{Cao2021KnowledgeableOE,
  title={Knowledgeable or Educated Guess? Revisiting Language Models as Knowledge Bases},
  author={Boxi Cao and Hongyu Lin and Xianpei Han and Le Sun and Lingyong Yan and M. Liao and Tong Xue and Jin Xu},
  booktitle={ACL/IJCNLP},
  year={2021}
}
Previous literature shows that pre-trained masked language models (MLMs) such as BERT can achieve competitive factual knowledge extraction performance on some datasets, indicating that MLMs can potentially be a reliable knowledge source. In this paper, we conduct a rigorous study to explore the underlying predicting mechanisms of MLMs over different extraction paradigms. By investigating the behaviors of MLMs, we find that previous decent performance mainly owes to the biased prompts which…
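
The extraction paradigm examined here is cloze-style prompting of a masked language model. The following is a minimal sketch of that setup, assuming the Hugging Face transformers library and the bert-base-cased checkpoint (neither is specified on this page); the helper name and example prompt are hypothetical illustrations.

import torch
from transformers import BertForMaskedLM, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-cased")
mlm = BertForMaskedLM.from_pretrained("bert-base-cased")
mlm.eval()

def top_predictions(prompt: str, k: int = 5):
    """Return BERT's top-k fillers for the single [MASK] token in `prompt`."""
    inputs = tok(prompt, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tok.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = mlm(**inputs).logits
    probs = torch.softmax(logits[0, mask_pos[0]], dim=-1)
    top = torch.topk(probs, k)
    return [(tok.convert_ids_to_tokens(i.item()), p.item())
            for i, p in zip(top.indices, top.values)]

# Hypothetical manually written prompt for a "capital of" relation.
print(top_predictions(f"The capital of France is {tok.mask_token}."))

Whether a correct completion from such a prompt reflects stored knowledge or a prompt-induced guess is exactly the question the paper raises.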
Do Prompt-Based Models Really Understand the Meaning of their Prompts?
Recently, a boom of papers has shown extraordinary progress in few-shot learning with various prompt-based models. Such success can give the impression that prompts help models to learn faster in…
Relational world knowledge representation in contextual language models: A review
It is concluded that LMs and KBs are complementary representation tools, as KBs provide a high standard of factual precision which can in turn be flexibly and expressively modeled by LMs; suggestions for future research in this direction are also provided.
Can Language Models be Biomedical Knowledge Bases?
  • Mujeen Sung, Jinhyuk Lee, Sean Yi, Minji Jeon, Sungdong Kim, Jaewoo Kang (2021)
Pre-trained language models (LMs) have become ubiquitous in solving various natural language processing (NLP) tasks. There has been increasing interest in what knowledge these LMs contain and how we…
Language Models as a Knowledge Source for Cognitive Agents
  • Robert E. Wray, III, James R. Kirk, J. Laird (2021)
Language models (LMs) are sentence-completion engines trained on massive corpora. LMs have emerged as a significant breakthrough in natural-language processing, providing capabilities that go far…

References

Showing 1-10 of 49 references
Eliciting Knowledge from Language Models Using Automatically Generated Prompts
The remarkable success of pretrained language models has motivated the study of what kinds of knowledge these models learn during pretraining. Reformulating tasks as fill-in-the-blanks problems…
Commonsense Knowledge Mining from Pretrained Models
This work develops a method for generating commonsense knowledge using a large, pre-trained bidirectional language model that can be used to rank a triple’s validity by the estimated pointwise mutual information between the two entities.
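
A PMI-style ranking of this kind can be approximated with a masked LM by comparing the probability of the tail entity with and without the head entity present in the template. The sketch below is an illustration under stated assumptions (single-token tail entities, a hypothetical relation template, and the same bert-base-cased setup as the earlier sketch), not the cited paper's exact estimator.

import torch
from transformers import BertForMaskedLM, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-cased")
mlm = BertForMaskedLM.from_pretrained("bert-base-cased")
mlm.eval()

def tail_log_prob(sentence: str, tail: str) -> float:
    """Log-probability of `tail` at the last [MASK] position in `sentence`
    (the tail slot in the templates below). Single-token tails only."""
    inputs = tok(sentence, return_tensors="pt")
    mask_pos = (inputs["input_ids"][0] == tok.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = mlm(**inputs).logits
    log_probs = torch.log_softmax(logits[0, mask_pos[-1]], dim=-1)
    return log_probs[tok.convert_tokens_to_ids(tail)].item()

def pmi_like_score(head: str, relation_text: str, tail: str) -> float:
    """How much the head raises the tail's probability, relative to a masked head."""
    with_head = f"{head} {relation_text} {tok.mask_token}."
    no_head = f"{tok.mask_token} {relation_text} {tok.mask_token}."
    return tail_log_prob(with_head, tail) - tail_log_prob(no_head, tail)

# Rank two candidate triples for a hypothetical "is located in" relation.
print(pmi_like_score("Paris", "is located in", "France"))
print(pmi_like_score("Paris", "is located in", "Germany"))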
Language Models as Knowledge Bases?
An in-depth analysis of the relational knowledge already present (without fine-tuning) in a wide range of state-of-the-art pretrained language models finds that BERT contains relational knowledge competitive with traditional NLP methods that have some access to oracle knowledge.
How Context Affects Language Models' Factual Predictions
This paper reports that augmenting pre-trained language models with relevant retrieved context dramatically improves their factual predictions, and that the resulting system, despite being unsupervised, is competitive with a supervised machine reading baseline.
Investigating BERT’s Knowledge of Language: Five Analysis Methods with NPIs
It is concluded that a variety of methods is necessary to reveal all relevant aspects of a model’s grammatical knowledge in a given domain.
Pre-training Is (Almost) All You Need: An Application to Commonsense Reasoning
This paper introduces a new scoring method that casts a plausibility ranking task in a full-text format and leverages the masked language modeling head tuned during the pre-training phase; it requires less annotated data than the standard classifier approach to reach equivalent performance.
Inducing Relational Knowledge from BERT
This work proposes a methodology for distilling relational knowledge from a pre-trained language model: the language model is fine-tuned to predict whether a given word pair is likely to be an instance of some relation, when given an instantiated template for that relation as input.
Language Models are Open Knowledge Graphs
This paper shows how to construct knowledge graphs (KGs) from pre-trained language models (e.g., BERT, GPT-2/3) without human supervision, and proposes an unsupervised method to cast the knowledge contained within language models into KGs.
What BERT Is Not: Lessons from a New Suite of Psycholinguistic Diagnostics for Language Models
A suite of diagnostics drawn from human language experiments is introduced, which allows targeted questions about the information language models use when generating predictions in context; the diagnostics are applied to the popular BERT model.
Benchmarking Knowledge-Enhanced Commonsense Question Answering via Knowledge-to-Text Transformation
This work benchmarks knowledge-enhanced CQA by conducting extensive experiments on multiple standard CQA datasets using a simple and effective knowledge-to-text transformation framework, and shows that context-sensitive knowledge selection, heterogeneous knowledge exploitation, and commonsense-rich language models are promising CQA directions.