Corpus ID: 235166723

PTR: Prompt Tuning with Rules for Text Classification

@article{Han2021PTRPT,
  title={PTR: Prompt Tuning with Rules for Text Classification},
  author={Xu Han and Weilin Zhao and Ning Ding and Zhiyuan Liu and Maosong Sun},
  journal={ArXiv},
  year={2021},
  volume={abs/2105.11259}
}
Fine-tuned pre-trained language models (PLMs) have achieved awesome performance on almost all NLP tasks. By using additional prompts to fine-tune PLMs, we can further stimulate the rich knowledge distributed in PLMs to better serve downstream tasks. Prompt tuning has achieved promising results on some few-class classification tasks such as sentiment classification and natural language inference. However, manually designing lots of language prompts is cumbersome and fallible. For those auto…
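
To make the prompt-tuning idea concrete, the snippet below is a minimal, hypothetical sketch of cloze-style relation classification with an off-the-shelf masked language model. PTR itself composes several rule-derived sub-prompts, each with its own mask, from entity-type conditions; the single-mask template, the label words, and the relation names used here are illustrative assumptions, not the paper's actual prompts.

```python
# A minimal, hypothetical sketch of cloze-style prompt-based relation
# classification with a masked LM. PTR composes several rule-based
# sub-prompts (each with its own mask); this single-mask template and
# the label words/relations below are illustrative assumptions only.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForMaskedLM.from_pretrained("roberta-base")
model.eval()

sentence = "Mark Twain was born in Florida, Missouri."
subj, obj = "Mark Twain", "Florida, Missouri"

# Cloze template: the PLM fills the mask with a relation-describing word.
prompt = f"{sentence} {subj} is the {tokenizer.mask_token} of {obj}."

# Verbalizer: each candidate relation maps to one label word (assumed here).
label_words = {
    "per:city_of_birth": " native",
    "per:employee_of": " employee",
    "no_relation": " stranger",
}

inputs = tokenizer(prompt, return_tensors="pt")
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]

with torch.no_grad():
    vocab_logits = model(**inputs).logits[0, mask_pos]  # scores over the vocabulary

# Score each relation by the logit of its label word at the mask position.
scores = {rel: vocab_logits[tokenizer.encode(w, add_special_tokens=False)[0]].item()
          for rel, w in label_words.items()}
print(max(scores, key=scores.get))
```

Fine-tuning would then optimize the usual MLM cross-entropy at the mask position against the gold relation's label word; PTR extends this by conjoining sub-prompts according to logic rules over entity types.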

Citations

Relation Extraction as Open-book Examination: Retrieval-enhanced Prompt Tuning
TLDR
This work regards RE as an open-book examination and proposes a new semi-parametric paradigm of retrieval-enhanced prompt tuning for relation extraction, which not only infers relations through knowledge stored in the weights during training but also assists decision-making by unwinding and querying examples in the open-book datastore.
EPPAC: Entity Pre-typing Relation Classification with Prompt Answer Centralizing
TLDR
A novel paradigm, Entity Pre-typing Relation Classification with Prompt Answer Centralizing (EPPAC), is proposed in this paper; it outperforms state-of-the-art approaches on TACRED and TACREV by 14.4% and 11.1%, respectively.
A Survey of Knowledge Enhanced Pre-trained Models
TLDR
A comprehensive overview of KEPTMs in NLP and CV is provided and the progress of pre-trained models and knowledge representation learning is introduced.
Why only Micro-F1? Class Weighting of Measures for Relation Classification
TLDR
This work introduces a framework for weighting schemes, where existing schemes are extremes, along with two new intermediate schemes, and shows that reporting results under different weighting schemes better highlights the strengths and weaknesses of a model.
KECP: Knowledge Enhanced Contrastive Prompting for Few-shot Extractive Question Answering
TLDR
A novel paradigm for EQA is introduced that transforms the task into a non-autoregressive Masked Language Modeling (MLM) generation problem, where rich semantics from an external knowledge base and the passage context support enhancing the representations of the query.
Decorate the Examples: A Simple Method of Prompt Design for Biomedical Relation Extraction
TLDR
This paper presents a simple yet effective method to systematically generate comprehensive prompts that reformulate the relation extraction task as a cloze-test task under a simple prompt formulation, and demonstrates the potential of the method in such a domain-specific relation extraction task.
ZeroPrompt: Scaling Prompt-Based Pretraining to 1,000 Tasks Improves Zero-Shot Generalization
TLDR
The results show that task scaling can substantially improve training efficiency by 30 times in FLOPs; the work also proposes a prompting method that incorporates a genetic algorithm to automatically search for the best prompt for unseen tasks, along with a few other improvements.
Recent Advances in Natural Language Processing via Large Pre-Trained Language Models: A Survey
TLDR
A survey of recent work that uses large, pre-trained transformer-based language models to solve NLP tasks via pre-training then fine-tuning, prompting, or text generation approaches.
SentiPrompt: Sentiment Knowledge Enhanced Prompt-Tuning for Aspect-Based Sentiment Analysis
TLDR
SentiPrompt is proposed to use sentiment-knowledge-enhanced prompts to tune the language model in a unified framework, injecting sentiment knowledge regarding aspects, opinions, and polarities into the prompt and explicitly modeling term relations by constructing consistency and polarity judgment templates from the ground-truth triplets.
OpenPrompt: An Open-source Framework for Prompt-learning
TLDR
OpenPrompt is a unified, easy-to-use toolkit for conducting prompt-learning over PLMs, equipped with efficiency, modularity, and extendibility; its combinability allows the freedom to combine different PLMs, task formats, and prompting modules in a unified paradigm (a condensed usage sketch follows the citation list below).
...
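
As referenced above, OpenPrompt is a released toolkit; the snippet below is a condensed sketch based on its documented usage pattern (a manual template plus a verbalizer wrapped around a PLM). The example text, classes, and label words are assumptions for illustration.

```python
# Condensed sketch of the OpenPrompt usage pattern: wrap a PLM with a manual
# template and a verbalizer, then classify. The template text, classes, and
# label words below are illustrative, not taken from the PTR paper.
import torch
from openprompt import PromptDataLoader, PromptForClassification
from openprompt.data_utils import InputExample
from openprompt.plms import load_plm
from openprompt.prompts import ManualTemplate, ManualVerbalizer

plm, tokenizer, model_config, WrapperClass = load_plm("bert", "bert-base-cased")

template = ManualTemplate(
    tokenizer=tokenizer,
    text='{"placeholder":"text_a"} It was {"mask"}.',
)
verbalizer = ManualVerbalizer(
    tokenizer=tokenizer,
    classes=["negative", "positive"],
    label_words={"negative": ["terrible"], "positive": ["great"]},
)
prompt_model = PromptForClassification(plm=plm, template=template, verbalizer=verbalizer)

dataset = [InputExample(guid=0, text_a="The movie was a pleasant surprise.")]
loader = PromptDataLoader(dataset=dataset, template=template, tokenizer=tokenizer,
                          tokenizer_wrapper_class=WrapperClass)

prompt_model.eval()
with torch.no_grad():
    for batch in loader:
        logits = prompt_model(batch)   # one score per class via the verbalizer
        print(logits.argmax(dim=-1))   # predicted class index
```

The modularity the TLDR refers to amounts to swapping the template, verbalizer, or PLM module independently while keeping the rest of the pipeline unchanged.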

References

Showing 1-10 of 42 references
Re-TACRED: Addressing Shortcomings of the TACRED Dataset
TLDR
Re-TACRED is a new and completely re-annotated version of the TACRED dataset that can be used to perform reliable evaluation of relation extraction models and helps uncover stronger relationships between the different models.
TACRED Revisited: A Thorough Evaluation of the TACRED Relation Extraction Task
TLDR
This paper first validates the most challenging 5K examples in the development and test sets using trained annotators, finding that label errors account for 8% absolute F1 test error and that more than 50% of these examples need to be relabeled.
Language Models as Knowledge Bases?
TLDR
An in-depth analysis of the relational knowledge already present (without fine-tuning) in a wide range of state-of-the-art pretrained language models finds that BERT contains relational knowledge competitive with traditional NLP methods that have some access to oracle knowledge.
Improving Language Understanding by Generative Pre-Training
TLDR
The general task-agnostic model outperforms discriminatively trained models that use architectures specifically crafted for each task, improving upon the state of the art in 9 out of the 12 tasks studied.
The Power of Scale for Parameter-Efficient Prompt Tuning
TLDR
This work explores “prompt tuning”, a simple yet effective mechanism for learning “soft prompts” to condition frozen language models to perform specific downstream tasks, and shows that conditioning a frozen model with soft prompts confers benefits in robustness to domain transfer, as compared to full model tuning (a minimal sketch of this mechanism follows the reference list below).
AdaPrompt: Adaptive Prompt-based Finetuning for Relation Extraction
TLDR
An adaptive label words selection mechanism is proposed that scatters the relation label into a variable number of label tokens to handle the complex multiple-label space, together with an auxiliary entity discriminator objective that encourages the model to focus on context representation learning.
Prototypical Representation Learning for Relation Extraction
TLDR
This paper aims to learn predictive, interpretable, and robust relation representations from distantly-labeled data that are effective in different settings, including supervised, distantly supervised, and few-shot learning.
GPT Understands, Too
TLDR
It is shown that GPTs can be better than or comparable to similar-sized BERTs on NLU tasks with a novel method, P-tuning, which employs trainable continuous prompt embeddings and outperforms the state-of-the-art approaches on the few-shot SuperGLUE benchmark.
An Improved Baseline for Sentence-level Relation Extraction
TLDR
An improved RE baseline model, incorporating entity representations with typed markers, achieves an F1 of 74.6% on TACRED, significantly outperforming previous SOTA methods, and is released to the community for future research.
Prefix-Tuning: Optimizing Continuous Prompts for Generation
TLDR
Prefix-tuning is proposed, a lightweight alternative to fine-tuning for natural language generation tasks, which keeps language model parameters frozen and instead optimizes a sequence of continuous task-specific vectors, which is called the prefix.
...
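
As a complement to the prompt-tuning entries above (notably “The Power of Scale for Parameter-Efficient Prompt Tuning”), the following is a minimal, hypothetical PyTorch sketch of soft prompt tuning: a small block of trainable embeddings is prepended to the input embeddings of a frozen PLM. The backbone, prompt length, and the small classification head are assumptions for brevity; the original work instead reads predictions from the frozen model's own LM head.

```python
# Minimal sketch of soft prompt tuning: train only a small block of prompt
# embeddings prepended to the inputs of a frozen PLM. The backbone, prompt
# length, and classification head are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class SoftPromptClassifier(nn.Module):
    def __init__(self, model_name="bert-base-uncased", prompt_len=20, num_classes=2):
        super().__init__()
        self.backbone = AutoModel.from_pretrained(model_name)
        for p in self.backbone.parameters():
            p.requires_grad = False  # the PLM stays frozen
        hidden = self.backbone.config.hidden_size
        # The trainable "soft prompt": prompt_len continuous vectors.
        self.soft_prompt = nn.Parameter(torch.randn(prompt_len, hidden) * 0.02)
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        tok_emb = self.backbone.embeddings.word_embeddings(input_ids)
        batch_size = input_ids.size(0)
        prompt = self.soft_prompt.unsqueeze(0).expand(batch_size, -1, -1)
        inputs_embeds = torch.cat([prompt, tok_emb], dim=1)
        prompt_mask = torch.ones(batch_size, prompt.size(1),
                                 dtype=attention_mask.dtype,
                                 device=attention_mask.device)
        mask = torch.cat([prompt_mask, attention_mask], dim=1)
        out = self.backbone(inputs_embeds=inputs_embeds, attention_mask=mask)
        # Read the representation at the first soft-prompt position.
        return self.classifier(out.last_hidden_state[:, 0])

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = SoftPromptClassifier()
enc = tokenizer(["Prompt tuning conditions a frozen model."], return_tensors="pt")
logits = model(enc["input_ids"], enc["attention_mask"])  # shape: [1, num_classes]
```

Only the soft prompt (and the tiny head in this sketch) receives gradients, which is what makes the approach parameter-efficient compared with full model tuning.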