Publications
Recurrent Neural Network for Text Classification with Multi-Task Learning
TLDR
This paper uses a multi-task learning framework to jointly learn across multiple related tasks with recurrent neural networks, proposing three different mechanisms for sharing information that model text with task-specific and shared layers.
Adversarial Multi-task Learning for Text Classification
TLDR
This paper proposes an adversarial multi-task learning framework that prevents the shared and private latent feature spaces from interfering with each other, and conducts extensive experiments on 16 different text classification tasks that demonstrate the benefits of the approach.
Long Short-Term Memory Neural Networks for Chinese Word Segmentation
TLDR
A novel neural network model for Chinese word segmentation is proposed, which adopts long short-term memory (LSTM) units to retain important previous information in memory cells, avoiding the fixed window-size limit of local context.
Gated Recursive Neural Network for Chinese Word Segmentation
TLDR
A gated recursive neural network (GRNN) for Chinese word segmentation is proposed, which uses reset and update gates to capture complicated combinations of context characters.
Convolutional Neural Tensor Network Architecture for Community-Based Question Answering
TLDR
This paper proposes a convolutional neural tensor network architecture that encodes sentences in a semantic space and models their interactions with a tensor layer; it outperforms other methods on two matching tasks.
How to Fine-Tune BERT for Text Classification?
TLDR
A general solution for BERT fine-tuning is provided and new state-of-the-art results on eight widely-studied text classification datasets are obtained.
End-to-End Neural Sentence Ordering Using Pointer Network
TLDR
An end-to-end neural approach to the sentence ordering problem is proposed, which uses a pointer network (Ptr-Net) to alleviate error propagation and exploit the whole contextual information.
GlossBERT: BERT for Word Sense Disambiguation with Gloss Knowledge
TLDR
This paper constructs context-gloss pairs and proposes three BERT-based models for WSD, which fine-tune the pre-trained BERT model on the SemCor 3.0 training corpus, and shows that the approach outperforms state-of-the-art systems.
Keyphrase Extraction Using Deep Recurrent Neural Networks on Twitter
TLDR
A novel deep recurrent neural network model is proposed that combines keyword and context information to extract keyphrases from tweets; experimental results show that the proposed method performs significantly better than previous methods.
Extractive Summarization as Text Matching
TLDR
This paper formulates the extractive summarization task as a semantic text matching problem, in which a source document and candidate summaries are matched in a semantic space, creating a semantic matching framework.