P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks
@article{Liu2021PTuningVP,
  title   = {P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks},
  author  = {Xiao Liu and Kaixuan Ji and Yicheng Fu and Zhengxiao Du and Zhilin Yang and Jie Tang},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2110.07602}
}
Prompt tuning, which only tunes continuous prompts with a frozen language model, substantially reduces per-task storage and memory usage during training. However, in the context of NLU, prior work and our results reveal that existing methods of prompt tuning do not perform well for normal-sized pretrained models and for hard sequence tasks, indicating a lack of universality. We present a novel empirical finding that properly-optimized prompt tuning can be universally effective across a wide range of…
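The core recipe the abstract describes, trainable continuous prompts attached to a frozen backbone, can be sketched in a few lines of PyTorch. The toy `DeepPromptClassifier` below only illustrates the per-layer prompting idea; the class name, sizes, and toy transformer backbone are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DeepPromptClassifier(nn.Module):
    """Toy illustration: frozen transformer layers, trainable prompts at every layer."""

    def __init__(self, vocab_size=1000, d_model=128, n_layers=4, n_heads=4,
                 prompt_len=8, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
            for _ in range(n_layers)
        )
        # One trainable prompt per layer: these (plus the small head) are the
        # only parameters that receive gradients.
        self.prompts = nn.ParameterList(
            nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)
            for _ in range(n_layers)
        )
        self.head = nn.Linear(d_model, n_classes)
        self.prompt_len = prompt_len

        # Freeze the "pretrained" backbone (embeddings + transformer layers).
        for p in self.embed.parameters():
            p.requires_grad_(False)
        for p in self.layers.parameters():
            p.requires_grad_(False)

    def forward(self, input_ids):
        h = self.embed(input_ids)                        # (B, T, D)
        for layer, prompt in zip(self.layers, self.prompts):
            p = prompt.unsqueeze(0).expand(h.size(0), -1, -1)
            h = layer(torch.cat([p, h], dim=1))          # prepend prompt tokens
            h = h[:, self.prompt_len:]                   # drop them before the next layer
        return self.head(h.mean(dim=1))                  # mean-pool + linear head


model = DeepPromptClassifier()
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-3)
logits = model(torch.randint(0, 1000, (2, 16)))
loss = nn.functional.cross_entropy(logits, torch.tensor([0, 1]))
loss.backward()
optimizer.step()
```

Only the per-layer prompts and the classification head are stored per task, which is where the storage and memory savings mentioned in the abstract come from.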
155 Citations
Dynamic Prompting: A Unified Framework for Prompt Tuning
- Computer Science
- 2023
Experimental results show that simple instance-level position-aware soft prompts can improve classification accuracy by up to 6 points on average across five datasets, reducing the gap with fine-tuning and demonstrating universal usefulness under full-data, few-shot, and multitask regimes.
No more fine-tuning? an experimental evaluation of prompt tuning in code intelligence
- Computer Science, ESEC/SIGSOFT FSE
- 2022
Pre-trained models have been shown effective in many code intelligence tasks. These models are pre-trained on a large-scale unlabeled corpus and then fine-tuned on downstream tasks. However, as the…
Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models
- Computer Science, ArXiv
- 2022
Though delta tuning was initially proposed as an efficient way to steer large models, some of the evidence discovered along with it could help further reveal the mechanisms of PLMs and even deep neural networks.
Input-Tuning: Adapting Unfamiliar Inputs to Frozen Pretrained Models
- Computer Science, ArXiv
- 2022
It is argued that one of the factors hindering the development of prompt tuning on NLG tasks is unfamiliar inputs, which motivates a more effective way to adapt such inputs to frozen PLMs.
Effectiveness of Data Augmentation for Prefix Tuning with Limited Data
- Computer Science
- 2023
It is shown that data augmentation can be used to boost the performance of prefix tuning models, but the effectiveness of each technique varies and certain methods can lead to a notable degradation in performance, particularly when using larger models and on harder tasks.
Contrastive Demonstration Tuning for Pre-trained Language Models
- Computer Science, EMNLP
- 2022
A novel pluggable, extensible, and efficient approach named contrastive demonstration tuning, which is free of demonstration sampling, is proposed; it can be plugged into any previous prompt-tuning approach and extended to classification tasks with a large number of categories.
Visual Prompt Tuning
- Computer Science, ECCV
- 2022
This paper introduces Visual Prompt Tuning as an efficient and effective alternative to full fine-tuning for large-scale Transformer models in vision and shows that VPT achieves significant performance gains compared to other parameter efficient tuning protocols.
Preserving In-Context Learning ability in Large Language Model Fine-tuning
- Computer Science, ArXiv
- 2022
ProMoT is proposed, a simple yet effective two-stage fine-tuning framework that preserves in-context abilities of the pretrained model and shows remarkable generalization ability on tasks that have different formats, e.g. natural language inference and English-French translation.
Multi-Task Pre-Training of Modular Prompt for Few-Shot Learning
- Computer Science, ArXiv
- 2022
This paper presents Multi-task Pre-trained Modular Prompt (MP2) to boost prompt tuning for few-shot learning and demonstrates that MP2 can achieve surprisingly fast and strong adaptation to downstream tasks by merely learning 8 parameters to combine the pre-trained modular prompts.
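A minimal sketch of the "combine frozen modular prompts with a handful of learned weights" idea summarized above; the `PromptCombiner` class, shapes, and softmax mixing rule are assumptions for illustration, not the MP2 implementation.

```python
import torch
import torch.nn as nn

class PromptCombiner(nn.Module):
    """Combine K frozen, pre-trained modular prompts using only K learned weights."""

    def __init__(self, pretrained_prompts):                      # (K, prompt_len, d_model)
        super().__init__()
        self.register_buffer("prompt_bank", pretrained_prompts)  # frozen, not trained
        # The only trainable parameters: one scalar weight per modular prompt.
        self.mix_logits = nn.Parameter(torch.zeros(pretrained_prompts.size(0)))

    def forward(self):
        weights = torch.softmax(self.mix_logits, dim=0)           # (K,)
        # Weighted sum over the K modular prompts -> one task-specific prompt.
        return torch.einsum("k,kld->ld", weights, self.prompt_bank)

# e.g. 8 modular prompts of 16 tokens each, hidden size 128
bank = torch.randn(8, 16, 128)
combiner = PromptCombiner(bank)
task_prompt = combiner()                               # (16, 128), driven by 8 weights
print(sum(p.numel() for p in combiner.parameters()))   # 8 trainable parameters
```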
Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning
- Computer Science
- 2023
This work proposes multitask prompt tuning (MPT), which first learns a single transferable prompt by distilling knowledge from multiple task-specific source prompts, then learns multiplicative low rank updates to this shared prompt to efficiently adapt it to each downstream target task.
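The multiplicative low-rank update described above can be sketched as a rank-1 Hadamard rescaling of a shared prompt; the distillation of that shared prompt from source tasks is omitted here, and the `LowRankPromptAdapter` name and shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LowRankPromptAdapter(nn.Module):
    """Shared prompt modulated by a per-task rank-1 multiplicative update."""

    def __init__(self, shared_prompt):                  # (prompt_len, d_model)
        super().__init__()
        self.register_buffer("shared", shared_prompt)   # distilled prompt, kept frozen
        L, D = shared_prompt.shape
        # Per-task trainable vectors; their outer product is a rank-1 matrix.
        self.u = nn.Parameter(torch.ones(L, 1))
        self.v = nn.Parameter(torch.ones(1, D))

    def forward(self):
        # Element-wise (Hadamard) rescaling of the shared prompt.
        return self.shared * (self.u @ self.v)

shared = torch.randn(20, 128)        # one transferable prompt shared across tasks
adapter = LowRankPromptAdapter(shared)
task_prompt = adapter()              # (20, 128); only 20 + 128 parameters are trained
```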
References
SHOWING 1-10 OF 52 REFERENCES
The Power of Scale for Parameter-Efficient Prompt Tuning
- Computer Science, EMNLP
- 2021
This work explores “prompt tuning”, a simple yet effective mechanism for learning “soft prompts” to condition frozen language models to perform specific downstream tasks, and shows that conditioning a frozen model with soft prompts confers benefits in robustness to domain transfer, as compared to full model tuning.
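One way to reproduce this input-level soft-prompt setup today is via the Hugging Face peft library; the snippet below is a sketch under that assumption (the library choice, base model, and prompt length are arbitrary), not the paper's original codebase.

```python
# pip install transformers peft   (using the peft library is an assumption here)
from transformers import AutoModelForCausalLM
from peft import PromptTuningConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")
config = PromptTuningConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=20)
model = get_peft_model(base, config)   # base model weights stay frozen
model.print_trainable_parameters()     # only the 20 soft-prompt vectors are trained
```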
Prefix-Tuning: Optimizing Continuous Prompts for Generation
- Computer Science, ACL
- 2021
Prefix-tuning is proposed, a lightweight alternative to fine-tuning for natural language generation tasks; it keeps language model parameters frozen and instead optimizes a sequence of continuous task-specific vectors, called the prefix.
PPT: Pre-trained Prompt Tuning for Few-shot Learning
- Computer Science, ACL
- 2022
This work proposes to pre-train prompts by adding soft prompts into the pre-training stage to obtain a better initialization, and names the resulting framework Pre-trained Prompt Tuning ("PPT").
Making Pre-trained Language Models Better Few-shot Learners
- Computer Science, ACL
- 2021
The LM-BFF approach makes minimal assumptions on task resources and domain expertise, and hence constitutes a strong task-agnostic method for few-shot learning.
GPT Understands, Too
- Computer Science, ArXiv
- 2021
It is shown that GPTs can be better than or comparable to similar-sized BERTs on NLU tasks with a novel method, P-tuning, which employs trainable continuous prompt embeddings and outperforms state-of-the-art approaches on the few-shot SuperGLUE benchmark.
Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models
- Computer Science, Findings
- 2022
This work shows that finetuning LMs in the few-shot setting can considerably reduce the need for prompt engineering, and recommends finetuned LMs for few-shot learning as they are more accurate, robust to different prompts, and can be made nearly as efficient as using frozen LMs.
Language Models are Few-Shot Learners
- Computer Science, NeurIPS
- 2020
GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic.
Learning How to Ask: Querying LMs with Mixtures of Soft Prompts
- Computer Science, NAACL
- 2021
This work explores the idea of learning prompts by gradient descent—either fine-tuning prompts taken from previous work, or starting from random initialization, showing that the implicit factual knowledge in language models was previously underestimated.
FewNLU: Benchmarking State-of-the-Art Methods for Few-Shot Natural Language Understanding
- Computer Science, ACL
- 2022
This work introduces an evaluation framework that improves previous evaluation procedures in three key aspects, i.e., test performance, dev-test correlation, and stability, and re-evaluate several state-of-the-art few-shot methods for NLU tasks.
Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification
- Computer Science, ACL
- 2022
This work focuses on incorporating external knowledge into the verbalizer, forming knowledgeable prompt-tuning (KPT), to improve and stabilize prompt tuning.
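The verbalizer side of this idea can be sketched as scoring each class by aggregating masked-LM logits over an expanded set of label words; the example verbalizer, helper name, and simple averaging rule below are illustrative assumptions rather than the KPT method itself, which also refines the expanded label words.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Hypothetical expanded verbalizer: several label words per class, as an external
# knowledge source might provide (these specific words are made up for illustration).
VERBALIZER = {
    "positive": ["great", "wonderful", "excellent"],
    "negative": ["terrible", "awful", "poor"],
}

def verbalize(mask_logits, verbalizer, tokenizer):
    """Score each class by averaging [MASK]-position logits over its label words."""
    scores = []
    for label_words in verbalizer.values():
        ids = [tokenizer.convert_tokens_to_ids(w) for w in label_words]
        scores.append(mask_logits[:, ids].mean(dim=-1))    # (batch,)
    return torch.stack(scores, dim=-1)                     # (batch, n_classes)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
inputs = tokenizer("The movie was [MASK].", return_tensors="pt")
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero()[0, 1]
with torch.no_grad():
    mask_logits = model(**inputs).logits[:, mask_pos]      # (batch, vocab)
class_scores = verbalize(mask_logits, VERBALIZER, tokenizer)   # (batch, 2)
```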