MetaPrompting: Learning to Learn Better Prompts

@article{Hou2022MetaPromptingLT,
  title={MetaPrompting: Learning to Learn Better Prompts},
  author={Yutai Hou and Hongyuan Dong and Xinghao Wang and Bohan Li and Wanxiang Che},
  journal={ArXiv},
  year={2022},
  volume={abs/2209.11486}
}
Prompting methods are regarded as one of the crucial advances for few-shot natural language processing. Recent research on prompting has moved from discrete-token-based "hard prompts" to continuous "soft prompts", which employ learnable vectors as pseudo prompt tokens and achieve better performance. Though showing promising prospects, these soft-prompting methods are observed to rely heavily on good initialization to take effect. Unfortunately, obtaining a perfect initialization for soft prompts…
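As a rough illustration of the soft-prompt setup the abstract describes, the sketch below prepends learnable pseudo-token vectors to input embeddings in PyTorch. The module name `SoftPrompt`, the dimensions, and the random initialization are illustrative assumptions, not the paper's implementation; the surrounding encoder is assumed to be a frozen pre-trained model.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Prepends learnable pseudo-token vectors to the input embeddings.

    Illustrative sketch only: n_tokens and embed_dim are toy values, and the
    surrounding model is assumed to be a frozen pre-trained encoder.
    """

    def __init__(self, n_tokens: int = 20, embed_dim: int = 768):
        super().__init__()
        # The soft prompt is just a trainable matrix; how it is initialized
        # is exactly the sensitivity the abstract points out.
        self.prompt = nn.Parameter(torch.randn(n_tokens, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, embed_dim)
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

# Usage: only the prompt parameters receive gradients; the encoder stays frozen.
soft_prompt = SoftPrompt()
dummy_embeds = torch.randn(4, 16, 768)   # stand-in for encoder input embeddings
extended = soft_prompt(dummy_embeds)     # (4, 36, 768)
optimizer = torch.optim.AdamW(soft_prompt.parameters(), lr=1e-3)
```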
References

Showing 1-10 of 52 references

Knowledge-Aware Meta-learning for Low-Resource Text Classification

KGML is proposed to introduce an additional representation for each sentence, learned from an extracted sentence-specific knowledge graph, bridging the gap between meta-training and meta-testing tasks by leveraging external knowledge bases.

Adapting Language Models for Zero-shot Learning by Meta-tuning on Dataset and Prompt Collections

Meta-tuning is proposed, which directly optimizes the zero-shot learning objective by fine-tuning pre-trained language models on a collection of datasets, built by aggregating 43 existing datasets and annotating 441 label descriptions in a question-answering (QA) format.

Few-shot Text Classification with Distributional Signatures

This paper demonstrates that this model consistently outperforms prototypical networks learned on lexical knowledge in both few-shot text classification and relation classification by a significant margin across six benchmark datasets.

How to train your MAML

This paper proposes various modifications to MAML that not only stabilize the system but also substantially improve the generalization performance, convergence speed, and computational overhead of MAML; the resulting framework is called MAML++.

On First-Order Meta-Learning Algorithms

A family of algorithms for learning a parameter initialization that can be fine-tuned quickly on a new task, using only first-order derivatives for the meta-learning updates, including Reptile, which works by repeatedly sampling a task, training on it, and moving the initialization towards the trained weights on that task.
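To make the Reptile update concrete, here is a compact sketch under toy assumptions: a linear regression model, a synthetic task sampler, and hand-picked learning rates, none of which come from the cited paper. It trains a copy of the model on one sampled task and then interpolates the shared initialization towards the adapted weights, using only first-order gradients.

```python
import copy
import torch
import torch.nn as nn

def reptile_step(model: nn.Module, sample_task, inner_steps: int = 5,
                 inner_lr: float = 1e-2, meta_lr: float = 0.1) -> None:
    """One Reptile meta-update: train a copy on a sampled task, then move
    the initialization towards the task-adapted weights (first-order only)."""
    task_model = copy.deepcopy(model)
    opt = torch.optim.SGD(task_model.parameters(), lr=inner_lr)
    for _ in range(inner_steps):
        x, y = sample_task()                      # placeholder task sampler
        loss = nn.functional.mse_loss(task_model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Interpolate the shared initialization towards the adapted weights.
    with torch.no_grad():
        for p, q in zip(model.parameters(), task_model.parameters()):
            p.add_(meta_lr * (q - p))

# Toy usage with a synthetic regression task distribution (illustrative only).
model = nn.Linear(8, 1)

def sample_task():
    w = torch.randn(8, 1)
    x = torch.randn(32, 8)
    return x, x @ w

for _ in range(100):
    reptile_step(model, sample_task)
```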

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks

We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning.
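The following sketch shows the MAML meta-objective on a toy functional linear model; the task sampler, shapes, and learning rates are assumptions for illustration only. The key point is that the inner adaptation step is differentiated through (create_graph=True), so the outer update shapes an initialization that adapts well after a few gradient steps.

```python
import torch

def forward(params, x):
    """Tiny linear model written functionally so adapted weights can be used."""
    w, b = params
    return x @ w + b

def maml_meta_loss(params, task, inner_lr=0.01):
    """One MAML meta-objective: adapt on the support set with a gradient step,
    then evaluate the adapted parameters on the query set."""
    (x_s, y_s), (x_q, y_q) = task
    support_loss = ((forward(params, x_s) - y_s) ** 2).mean()
    grads = torch.autograd.grad(support_loss, params, create_graph=True)
    adapted = [p - inner_lr * g for p, g in zip(params, grads)]
    return ((forward(adapted, x_q) - y_q) ** 2).mean()

# Toy usage: meta-learn an initialization over synthetic regression tasks.
params = [torch.zeros(8, 1, requires_grad=True),
          torch.zeros(1, requires_grad=True)]
meta_opt = torch.optim.Adam(params, lr=1e-3)

def sample_task():
    w = torch.randn(8, 1)
    def make(n):
        x = torch.randn(n, 8)
        return x, x @ w
    return make(16), make(16)          # (support set, query set)

for _ in range(200):
    meta_opt.zero_grad()
    loss = sum(maml_meta_loss(params, sample_task()) for _ in range(4)) / 4
    loss.backward()
    meta_opt.step()
```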

News Category Dataset

A News Category Dataset is presented that contains around 200k HuffPost news headlines from 2012 to 2018, along with useful metadata to enable various NLP tasks, and some novel insights from the dataset are produced.

GPT Understands, Too

It is shown that GPTs can be better than or comparable to similar-sized BERTs on NLU tasks with a novel method, P-tuning, which employs trainable continuous prompt embeddings and outperforms state-of-the-art approaches on the few-shot SuperGLUE benchmark.

Prefix-Tuning: Optimizing Continuous Prompts for Generation

Prefix-tuning is proposed, a lightweight alternative to fine-tuning for natural language generation tasks, which keeps language model parameters frozen and instead optimizes a sequence of continuous task-specific vectors, called the prefix.
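As a simplified sketch of the mechanism, the code below adds trainable prefix key/value vectors to a single attention layer while freezing the "language model" projections; actual prefix-tuning inserts such prefixes at every layer of a pre-trained decoder, and the layer sizes and names here are assumptions, not the cited implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrefixAttention(nn.Module):
    """Single attention layer with a trainable prefix of key/value vectors.

    Sketch only: the projection weights stand in for the frozen language
    model, while only prefix_k / prefix_v are trained for the task.
    """

    def __init__(self, d_model: int = 64, prefix_len: int = 10):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        for p in self.parameters():       # "language model" weights stay frozen
            p.requires_grad_(False)
        # Task-specific continuous prefix: the only trainable parameters.
        self.prefix_k = nn.Parameter(torch.randn(prefix_len, d_model) * 0.02)
        self.prefix_v = nn.Parameter(torch.randn(prefix_len, d_model) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); prefix keys/values are prepended so every
        # token can attend to the learned prefix positions.
        b = x.size(0)
        q = self.q_proj(x)
        k = torch.cat([self.prefix_k.expand(b, -1, -1), self.k_proj(x)], dim=1)
        v = torch.cat([self.prefix_v.expand(b, -1, -1), self.v_proj(x)], dim=1)
        attn = F.softmax(q @ k.transpose(1, 2) / k.size(-1) ** 0.5, dim=-1)
        return attn @ v

layer = PrefixAttention()
trainable = [p for p in layer.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=5e-4)   # only the prefix is updated
out = layer(torch.randn(2, 16, 64))                 # (2, 16, 64)
```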

Making Pre-trained Language Models Better Few-shot Learners

The LM-BFF approach makes minimal assumptions on task resources and domain expertise, and hence constitutes a strong task-agnostic method for few-shot learning.
...