Corpus ID: 212747830

Pre-trained Models for Natural Language Processing: A Survey

@article{Qiu2020PretrainedMF,
  title={Pre-trained Models for Natural Language Processing: A Survey},
  author={Xipeng Qiu and Tianxiang Sun and Yige Xu and Yunfan Shao and Ning Dai and Xuanjing Huang},
  journal={ArXiv},
  year={2020},
  volume={abs/2003.08271}
}
  • Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang
  • Published in ArXiv, 2020
  • Computer Science
  • Recently, the emergence of pre-trained models (PTMs) has brought natural language processing (NLP) to a new era. In this survey, we provide a comprehensive review of PTMs for NLP. We first briefly introduce language representation learning and its research progress. Then we systematically categorize existing PTMs based on a taxonomy with four perspectives. Next, we describe how to adapt the knowledge of PTMs to the downstream tasks. Finally, we outline some potential directions of PTMs for future research.
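  The abstract's mention of adapting the knowledge of PTMs to downstream tasks corresponds, in practice, to fine-tuning a pre-trained checkpoint with a small task-specific head. The sketch below is a minimal illustration of that step, not code from the survey; it assumes the Hugging Face transformers library, the bert-base-uncased checkpoint, and a toy two-label batch chosen purely for demonstration.

```python
# Minimal sketch (not from the survey): adapting a pre-trained model to a
# downstream classification task by fine-tuning, using Hugging Face transformers.
# Model name, labels, and hyperparameters are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # adds a task-specific classification head
)

# A toy labeled batch standing in for a downstream dataset.
texts = ["a readable, well-organized survey", "the running example is confusing"]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
optimizer.zero_grad()
outputs = model(**batch, labels=labels)  # forward pass computes cross-entropy loss
outputs.loss.backward()                  # backpropagate into the pre-trained weights
optimizer.step()                         # one fine-tuning update
```

  In a realistic setting this update would be repeated over a full downstream dataset with an evaluation loop; feature extraction (freezing the pre-trained encoder and training only the head) is the other common adaptation strategy discussed in this literature.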

    References

    Publications referenced by this paper (showing 1–10 of 242).

    Cross-lingual Language Model Pretraining
    8 excerpts · Highly influential

    K-BERT: Enabling Language Representation with Knowledge Graph
    5 excerpts · Highly influential

    KEPLER: A unified model for knowledge embedding and pre-trained language representation
    Xiaozhi Wang, Tianyu Gao, +3 authors Jian Tang · arXiv preprint arXiv:1911.06136, 2019
    4 excerpts · Highly influential

    RoBERTa: A Robustly Optimized BERT Pretraining Approach
    10 excerpts · Highly influential