Corpus ID: 237279047

Prompt-Learning for Fine-Grained Entity Typing

@article{Ding2021PromptLearningFF,
  title={Prompt-Learning for Fine-Grained Entity Typing},
  author={Ning Ding and Yulin Chen and Xu Han and Guangwei Xu and Pengjun Xie and Haitao Zheng and Zhiyuan Liu and Juan-Zi Li and Hong-Gee Kim},
  journal={ArXiv},
  year={2021},
  volume={abs/2108.10604}
}
As an effective approach to tune pre-trained language models (PLMs) for specific tasks, prompt-learning has recently attracted much attention from researchers. By using cloze-style language prompts to stimulate the versatile knowledge of PLMs, prompt-learning can achieve promising results on a series of NLP tasks, such as natural language inference, sentiment classification, and knowledge probing. In this work, we investigate the application of prompt-learning on fine-grained entity typing in…
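To make the cloze-style formulation concrete, the following is a minimal sketch of prompt-based entity typing with a masked language model, assuming the Hugging Face transformers library; the template, the label-word-to-type mapping, and the bert-base-cased checkpoint are illustrative choices, not the paper's exact configuration.

```python
# Minimal sketch: cloze-style prompt-learning for entity typing with a masked LM.
# Assumption: Hugging Face `transformers` and `bert-base-cased`; the template and
# label words below are illustrative placeholders, not the paper's exact setup.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")

sentence = "Steve Jobs founded Apple in 1976."
mention = "Steve Jobs"
# Cloze template: the PLM fills the [MASK] position with a type word.
prompt = f"{sentence} In this sentence, {mention} is a {tokenizer.mask_token}."

# Hypothetical verbalizer: label words mapped to coarse entity types.
label_words = {"person": "PERSON", "organization": "ORGANIZATION", "location": "LOCATION"}

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
mask_logits = logits[0, mask_pos, :].squeeze(0)

# Score only the label words at the [MASK] position and pick the best type.
scores = {w: mask_logits[tokenizer.convert_tokens_to_ids(w)].item() for w in label_words}
best_word = max(scores, key=scores.get)
print(best_word, "->", label_words[best_word])
```

In few-shot or fully supervised settings, the same masked-position scores would typically be trained against gold type labels rather than used zero-shot as above.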

Citations

Prototypical Verbalizer for Prompt-based Few-shot Tuning
TLDR
This work proposes the prototypical verbalizer (ProtoVerb) which is built directly from training data and demonstrates that ProtoVerb significantly outperforms current automatic verbalizers, especially when training data is extremely scarce.
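As a rough illustration of the idea, the sketch below builds class prototypes from training data, assuming Hugging Face transformers and a bert-base-cased encoder; it uses mean [MASK]-position embeddings and cosine similarity as a simplification (ProtoVerb itself learns its prototypes with a contrastive objective), and the template and toy training set are placeholders.

```python
# Simplified prototypical-verbalizer sketch (mean-embedding variant, not ProtoVerb's
# contrastive objective). Assumes Hugging Face `transformers` and `bert-base-cased`.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
encoder = AutoModel.from_pretrained("bert-base-cased")

def mask_embedding(sentence: str) -> torch.Tensor:
    """Encode a cloze prompt and return the hidden state at the [MASK] position."""
    prompt = f"{sentence} It was {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state[0]
    mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero().item()
    return hidden[mask_pos]

# A toy labeled set stands in for the (scarce) training data.
train = {
    "positive": ["A delightful film.", "I loved every minute."],
    "negative": ["A complete waste of time.", "The plot made no sense."],
}

# Prototype per class = mean of the [MASK] embeddings of that class's examples.
prototypes = {c: torch.stack([mask_embedding(s) for s in xs]).mean(0) for c, xs in train.items()}

# Classify a new example by cosine similarity to each prototype.
query = mask_embedding("An unforgettable performance.")
scores = {c: torch.cosine_similarity(query, p, dim=0).item() for c, p in prototypes.items()}
print(max(scores, key=scores.get))
```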
Commonsense Knowledge-Aware Prompt Tuning for Few-Shot NOTA Relation Classification
TLDR
The commonsense knowledge-aware prompt tuning (CKPT) method is proposed: a simple and effective prompt-learning method built by constructing relation-oriented templates, which can further stimulate the rich knowledge distributed in PLMs to better serve downstream tasks.
Prompt-Learning for Short Text Classification
TLDR
This paper proposes a simple short text classification approach that makes use of prompt-learning based on knowledgeable expansion, which outperforms the state-of-the-art by up to 6 accuracy points on three well-known datasets.
Ultra-fine Entity Typing with Indirect Supervision from Natural Language Inference
TLDR
LITE is presented, a new approach that formulates entity typing as a natural language inference (NLI) problem, making use of the indirect supervision from NLI to infer type information meaningfully represented as textual hypotheses and alleviate the data scarcity issue.
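A hedged sketch of this NLI formulation is given below, using the off-the-shelf roberta-large-mnli checkpoint from Hugging Face transformers; the hypothesis template and candidate types are illustrative and may differ from LITE's exact design.

```python
# Sketch: entity typing cast as NLI, scoring type hypotheses with an off-the-shelf
# NLI model. Assumes Hugging Face `transformers` and `roberta-large-mnli`; the
# hypothesis template and candidate types are illustrative placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
nli = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

premise = "Steve Jobs founded Apple in 1976."
mention = "Steve Jobs"
candidate_types = ["person", "entrepreneur", "organization", "location"]

entail_id = nli.config.label2id.get("ENTAILMENT", 2)  # index of the entailment class

scores = {}
for t in candidate_types:
    hypothesis = f"{mention} is a {t}."                # type rendered as a textual hypothesis
    inputs = tokenizer(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        probs = nli(**inputs).logits.softmax(-1)[0]
    scores[t] = probs[entail_id].item()                # entailment probability as type score

print(sorted(scores.items(), key=lambda kv: -kv[1]))
```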
Differentiable Prompt Makes Pre-trained Language Models Better Few-shot Learners
TLDR
A novel pluggable, extensible, and efficient approach named DifferentiAble pRompT (DART) is proposed, which can convert small language models into better few-shot learners.
KnowPrompt: Knowledge-aware Prompt-tuning with Synergistic Optimization for Relation Extraction
TLDR
A Knowledge-aware Prompt-tuning approach with synergistic optimization (KnowPrompt) is proposed that injects latent knowledge contained in relation labels into prompt construction with learnable virtual type words and answer words.
Making Pre-trained Language Models Good Long-tailed Learners
TLDR
It is demonstrated that prompt-tuning makes pre-trained language models at least good long-tailed learners, with the input structure playing a less important role.
Good Examples Make A Faster Learner: Simple Demonstration-based Learning for Low-resource NER
TLDR
A simple demonstration-based learning method for NER, which lets the input be prefaced by task demonstrations for in-context learning, and results on in-domain learning and domain adaptation show that the model’s performance in low-resource settings can be largely improved with a suitable demonstration strategy.
Contrastive Demonstration Tuning for Pre-trained Language Models
TLDR
Experimental results illustrate that the proposed pluggable, extensible, and efficient approach, contrastive demonstration tuning, which is free of demonstration sampling, yields better performance when integrated with the previous approaches LM-BFF and P-tuning.
LightNER: A Lightweight Generative Framework with Prompt-guided Attention for Low-resource NER
TLDR
A lightweight generative framework with prompt-guided attention for low-resource NER (LightNER) is presented, which converts sequence labeling into generating the entity pointer index sequence and entity categories without any label-specific classifiers, thereby addressing the class transfer issue.

References

Showing 1-10 of 46 references
Ultra-Fine Entity Typing with Weak Supervision from a Masked Language Model
TLDR
This paper proposes to obtain training data for ultra-fine entity typing by using a BERT Masked Language Model (MLM), and constructs an input for the BERT MLM so that it predicts context dependent hypernyms of the mention which can be used as type labels.
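The sketch below shows one way such weak supervision can be mined, assuming Hugging Face transformers and bert-base-cased; the "[MASK] such as <mention>" pattern is an illustrative hypernym-eliciting template and may differ from the paper's exact input construction.

```python
# Sketch: mining hypernym-style type labels for a mention from a masked LM.
# Assumes Hugging Face `transformers` and `bert-base-cased`; the template is an
# illustrative hypernym pattern, not necessarily the paper's exact construction.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-cased")

mention = "Leonardo DiCaprio"
# Insert a hypernym pattern around the mention so the [MASK] prediction is a type word.
prompt = f"{tokenizer.mask_token} such as {mention} starred in the film."

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = mlm(**inputs).logits

mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero().item()
top_ids = torch.topk(logits[0, mask_pos], k=10).indices
# The top-k predictions act as weak (noisy) type labels for training a typing model.
print(tokenizer.convert_ids_to_tokens(top_ids.tolist()))
```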
Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification
TLDR
This work focuses on incorporating external knowledge into the verbalizer, forming knowledgeable prompt-tuning (KPT), to improve and stabilize prompt-tuning.
Ultra-Fine Entity Typing
TLDR
A model that can predict ultra-fine types is presented; it is trained using a multitask objective that pools the authors' new head-word supervision with prior supervision from entity linking, achieves state-of-the-art performance on an existing fine-grained entity typing benchmark, and sets baselines for newly introduced datasets.
Eliciting Knowledge from Language Models Using Automatically Generated Prompts
The remarkable success of pretrained language models has motivated the study of what kinds of knowledge these models learn during pretraining. Reformulating tasks as fill-in-the-blanks problems…
Prefix-Tuning: Optimizing Continuous Prompts for Generation
TLDR
Prefix-tuning is proposed, a lightweight alternative to fine-tuning for natural language generation tasks, which keeps language model parameters frozen and instead optimizes a sequence of continuous task-specific vectors, called the prefix.
AFET: Automatic Fine-Grained Entity Typing by Hierarchical Partial-Label Embedding
TLDR
This paper proposes a novel embedding method to separately model “clean” and “noisy” mentions, and incorporates the given type hierarchy to induce loss functions.
The Power of Scale for Parameter-Efficient Prompt Tuning
TLDR
This work explores “prompt tuning”, a simple yet effective mechanism for learning “soft prompts” to condition frozen language models to perform specific downstream tasks, and shows that conditioning a frozen model with soft prompts confers benefits in robustness to domain transfer, as compared to full model tuning.
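A minimal soft-prompt sketch follows, assuming Hugging Face transformers and PyTorch; the prompt length, the bert-base-cased checkpoint, and the omitted training loop are illustrative placeholders (the paper itself studies T5 models).

```python
# Sketch: soft prompt tuning — a frozen PLM conditioned by trainable prompt vectors
# prepended to the input embeddings. Assumes Hugging Face `transformers` and PyTorch;
# prompt length, checkpoint, and training details are illustrative placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")

# Freeze every pre-trained parameter; only the soft prompt is trainable.
for p in model.parameters():
    p.requires_grad = False

prompt_len = 20
hidden_size = model.config.hidden_size
soft_prompt = torch.nn.Parameter(torch.randn(prompt_len, hidden_size) * 0.02)

def forward_with_prompt(text: str) -> torch.Tensor:
    inputs = tokenizer(text, return_tensors="pt")
    token_embeds = model.get_input_embeddings()(inputs.input_ids)            # [1, T, H]
    prompted = torch.cat([soft_prompt.unsqueeze(0), token_embeds], dim=1)    # prepend prompt
    attn = torch.cat(
        [torch.ones(1, prompt_len, dtype=inputs.attention_mask.dtype), inputs.attention_mask],
        dim=1,
    )
    return model(inputs_embeds=prompted, attention_mask=attn).logits

optimizer = torch.optim.AdamW([soft_prompt], lr=1e-3)
logits = forward_with_prompt("The movie was [MASK].")
# A real run would compute a loss at the [MASK] position, then loss.backward()
# and optimizer.step(), updating only the soft prompt.
print(logits.shape)
```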
Neural Architectures for Fine-grained Entity Type Classification
TLDR
This work investigates several neural network architectures for fine-grained entity type classification and establishes that the attention mechanism learns to attend over syntactic heads and the phrase containing the mention, both of which are known to be strong hand-crafted features for this task.
Language Models as Knowledge Bases?
TLDR
An in-depth analysis of the relational knowledge already present (without fine-tuning) in a wide range of state-of-the-art pretrained language models finds that BERT contains relational knowledge competitive with traditional NLP methods that have some access to oracle knowledge.
Improving Language Understanding by Generative Pre-Training
TLDR
The general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, improving upon the state of the art in 9 out of the 12 tasks studied.