How to Fine-Tune BERT for Text Classification?
TLDR
A general solution for BERT fine-tuning is provided and new state-of-the-art results on eight widely-studied text classification datasets are obtained.
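As a rough illustration of the recipe this line of work builds on, the sketch below fine-tunes a pre-trained BERT with a classification head via the Hugging Face transformers API. The checkpoint name, label count, toy batch, and learning rate are all illustrative assumptions, not the paper's exact configuration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Pre-trained encoder plus a freshly initialized classification head
# (binary labels here; num_labels=2 is an assumption for the demo).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Toy batch standing in for a real text classification dataset.
texts = ["great movie, would watch again", "a complete waste of time"]
labels = torch.tensor([1, 0])
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

# Small learning rates (around 2e-5) are the customary choice when
# fine-tuning BERT; much larger ones risk catastrophic forgetting.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**batch, labels=labels)  # cross-entropy loss computed internally
outputs.loss.backward()
optimizer.step()
optimizer.zero_grad()
```

The paper studies variations on this baseline (e.g. further pre-training and layer-wise learning rates); the loop above is only the common starting point.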
Pre-trained Models for Natural Language Processing: A Survey
TLDR
This survey is intended as a hands-on guide for understanding, using, and developing PTMs for various NLP tasks.
Improving BERT Fine-Tuning via Self-Ensemble and Self-Distillation
TLDR
The proposed methods significantly improve the adaptation of BERT without any external data or knowledge, using two mechanisms: self-ensemble and self-distillation.
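A rough sketch of the two mechanisms follows. The EMA decay, the MSE form of the distillation term, and the weight alpha are illustrative assumptions; the paper explores several variants of both ideas.

```python
import torch
import torch.nn.functional as F

def ema_update(teacher, student, decay=0.999):
    # Self-ensemble: the teacher's parameters track an exponential moving
    # average of the student's parameters over training steps.
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(decay).add_(s, alpha=1.0 - decay)

def self_distill_loss(student_logits, teacher_logits, labels, alpha=1.0):
    # Self-distillation: ordinary cross-entropy on the labels plus a term
    # pulling the student's logits toward the ensembled teacher's.
    ce = F.cross_entropy(student_logits, labels)
    distill = F.mse_loss(student_logits, teacher_logits.detach())
    return ce + alpha * distill

# Inside a training step (teacher initialized as a deep copy of the model):
#   loss = self_distill_loss(model(**batch).logits,
#                            teacher(**batch).logits, labels)
#   loss.backward(); optimizer.step(); optimizer.zero_grad()
#   ema_update(teacher, model)
```

Because the teacher is derived from the model's own training trajectory, no external data, teacher network, or knowledge source is needed, which is the point of the TLDR above.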
Keyphrase Generation with Fine-Grained Evaluation-Guided Reinforcement Learning
TLDR
A new fine-grained evaluation metric is proposed that considers multiple granularities: token-level F1 score, edit distance, duplication, and the number of predictions. The metric effectively eases the synonym problem and yields higher-quality predictions.
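A minimal sketch of two of the listed ingredients, token-level F1 and edit distance over keyphrase tokens; how the paper weights and combines all the components into a reinforcement-learning reward is not reproduced here.

```python
def token_f1(pred_tokens, gold_tokens):
    # Token-level F1: partial credit for overlapping tokens, so a
    # near-synonym like "neural networks" vs. "neural network" is not
    # scored as a total miss the way exact-match F1 would score it.
    common = set(pred_tokens) & set(gold_tokens)
    if not common:
        return 0.0
    p = len(common) / len(pred_tokens)
    r = len(common) / len(gold_tokens)
    return 2 * p * r / (p + r)

def edit_distance(a, b):
    # Levenshtein distance over token sequences (single-row DP).
    dp = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (x != y))
    return dp[-1]

print(token_f1(["neural", "network"], ["neural", "networks"]))       # 0.5
print(edit_distance(["neural", "network"], ["neural", "networks"]))  # 1
```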
One2Set: Generating Diverse Keyphrases as a Set
TLDR
This work proposes a new training paradigm, One2Set, which does not predefine an order for concatenating the keyphrases, and a novel model that uses a fixed set of learned control codes as conditions to generate a set of keyphrases in parallel.
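A toy sketch of the two pieces the TLDR names: a fixed bank of learned control-code embeddings (one per parallel keyphrase slot) and an order-invariant assignment of gold keyphrases to slots. Dimensions are illustrative, and plain Hungarian matching via scipy stands in for the paper's K-step target assignment.

```python
import torch
import torch.nn as nn
from scipy.optimize import linear_sum_assignment

# One learned embedding per slot; each code conditions the decoder to
# emit one keyphrase, so all slots decode in parallel and the targets
# never need to be concatenated in a predefined order.
num_slots, d_model = 4, 16  # illustrative, not the paper's settings
control_codes = nn.Embedding(num_slots, d_model)
slot_queries = control_codes(torch.arange(num_slots))  # (num_slots, d_model)

# Order-invariant training: cost[i, j] would be the negative
# log-likelihood of gold keyphrase j under slot i; bipartite matching
# then decides which slot is supervised by which target.
num_gold = 3
cost = torch.rand(num_slots, num_gold)  # placeholder costs
slot_idx, gold_idx = linear_sum_assignment(cost.detach().numpy())
print(list(zip(slot_idx.tolist(), gold_idx.tolist())))
# Slots left unmatched are trained to emit a special "none" token.
```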