Corpus ID: 222163285

Beyond Fine-tuning: Few-Sample Sentence Embedding Transfer

@inproceedings{Garg2020BeyondFF,
  title={Beyond Fine-tuning: Few-Sample Sentence Embedding Transfer},
  author={Siddhant Garg and Rohit Kumar Sharma and Yingyu Liang},
  booktitle={AACL/IJCNLP},
  year={2020}
}
Fine-tuning (FT) pre-trained sentence embedding models on small datasets has been shown to have limitations. In this paper we show that concatenating the embeddings from the pre-trained model with those from a simple sentence embedding model trained only on the target data can improve over the performance of FT for few-sample tasks. To this end, a linear classifier is trained on the combined embeddings, either by freezing the embedding model weights or training the classifier and embedding…
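
The recipe described in the abstract is simple to sketch. The snippet below is a minimal illustration of the frozen-embeddings variant, not the authors' released code: `pretrained_embed` and `simple_embed` are hypothetical placeholders for a frozen pre-trained sentence encoder and a small embedding model trained only on the target data, and a linear classifier is fit on the concatenated vectors.

```python
# Minimal sketch of the embedding-concatenation idea (assumptions: the two
# embedding functions below are stand-ins for a frozen pre-trained encoder
# and a simple target-data-only encoder; here they return random vectors so
# the example runs end to end).
import numpy as np
from sklearn.linear_model import LogisticRegression

def pretrained_embed(sentences):
    # Placeholder for a frozen pre-trained sentence encoder (e.g., 768-dim).
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(sentences), 768))

def simple_embed(sentences):
    # Placeholder for a small embedding model trained only on target data.
    rng = np.random.default_rng(1)
    return rng.normal(size=(len(sentences), 100))

train_sents = ["a few labelled sentences", "from the target task"]
train_labels = [0, 1]

# Concatenate the two embedding spaces and train a linear classifier on top,
# keeping both embedding models frozen.
X_train = np.concatenate(
    [pretrained_embed(train_sents), simple_embed(train_sents)], axis=1
)
clf = LogisticRegression(max_iter=1000).fit(X_train, train_labels)

test_sents = ["an unseen sentence"]
X_test = np.concatenate(
    [pretrained_embed(test_sents), simple_embed(test_sents)], axis=1
)
print(clf.predict(X_test))
```

The other regime mentioned in the abstract trains the classifier together with the embedding model rather than keeping the weights frozen.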
