Corpus ID: 230433941

Prefix-Tuning: Optimizing Continuous Prompts for Generation

@article{Li2021PrefixTuningOC,
  title={Prefix-Tuning: Optimizing Continuous Prompts for Generation},
  author={Xiang Lisa Li and Percy Liang},
  journal={ArXiv},
  year={2021},
  volume={abs/2101.00190}
}
Abstract

Fine-tuning is the de facto way to leverage large pretrained language models to perform downstream tasks. However, it modifies all the language model parameters and therefore necessitates storing a full copy for each task. In this paper, we propose prefix-tuning, a lightweight alternative to fine-tuning for natural language generation tasks, which keeps language model parameters frozen, but optimizes a small continuous task-specific vector (called the prefix). Prefix-tuning draws inspiration…
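Below is a minimal sketch of the recipe the abstract describes: freeze every pretrained language-model parameter and optimize only a small continuous prefix that subsequent tokens can attend to. It assumes GPT-2 via Hugging Face Transformers; the prefix length, learning rate, toy example, and the use of `past_key_values` to inject the prefix are illustrative assumptions rather than the paper's exact configuration (the full method also reparameterizes the prefix with an MLP during training), and the accepted `past_key_values` format can differ across library versions.

```python
# Minimal prefix-tuning sketch (illustrative assumptions, not the authors' exact setup).
import torch
from torch import nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device)

# Freeze all language-model parameters; only the prefix is optimized.
for p in model.parameters():
    p.requires_grad = False

cfg = model.config
prefix_len = 10                       # number of "virtual tokens" (assumed)
head_dim = cfg.n_embd // cfg.n_head

# Learnable key/value activations for every layer, head, and prefix position.
prefix_kv = nn.Parameter(
    0.02 * torch.randn(cfg.n_layer, 2, cfg.n_head, prefix_len, head_dim, device=device)
)
optimizer = torch.optim.AdamW([prefix_kv], lr=5e-4)

def prefix_past(batch_size):
    """Expand the prefix into per-layer (key, value) tuples that GPT-2 accepts
    as past_key_values, so real tokens attend to the prefix positions."""
    return tuple(
        (
            prefix_kv[layer, 0].unsqueeze(0).expand(batch_size, -1, -1, -1),
            prefix_kv[layer, 1].unsqueeze(0).expand(batch_size, -1, -1, -1),
        )
        for layer in range(cfg.n_layer)
    )

# One toy training step on a single table-to-text style example.
batch = tokenizer(
    "name : Blue Spice | food : Chinese -> Blue Spice serves Chinese food .",
    return_tensors="pt",
).to(device)
input_ids = batch["input_ids"]
# The attention mask must also cover the prefix positions.
attention_mask = torch.cat(
    [
        torch.ones(input_ids.size(0), prefix_len,
                   dtype=batch["attention_mask"].dtype, device=device),
        batch["attention_mask"],
    ],
    dim=1,
)
out = model(
    input_ids=input_ids,
    attention_mask=attention_mask,
    past_key_values=prefix_past(input_ids.size(0)),
    labels=input_ids,
)
out.loss.backward()       # gradients reach only prefix_kv; the LM stays frozen
optimizer.step()
optimizer.zero_grad()
```

Under these assumed settings, the only parameters stored per task live in prefix_kv (12 layers × 2 × 12 heads × 10 positions × 64 dims ≈ 184K values, well under 1% of GPT-2's 124M parameters), which is the storage saving the abstract highlights.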
Citations

PTR: Prompt Tuning with Rules for Text Classification
Compacter: Efficient Low-Rank Hypercomplex Adapter Layers
Structural Adapters in Pretrained Language Models for AMR-to-text Generation
Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm
Go Forth and Prosper: Language Modeling with Ancient Textual History
Constrained Language Models Yield Few-Shot Semantic Parsers

References

How fine can fine-tuning be? Learning efficient language models
Parameter-Efficient Transfer Learning for NLP
Incorporating BERT into Neural Machine Translation
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Text-to-Text Pre-Training for Data-to-Text Tasks
Plug and Play Language Models: A Simple Approach to Controlled Text Generation
Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning
CTRL: A Conditional Transformer Language Model for Controllable Generation
Text Summarization with Pretrained Encoders