Publications
Recurrent Neural Network for Text Classification with Multi-Task Learning
This paper uses the multi-task learning framework to jointly learn across multiple related tasks, proposing three recurrent neural network architectures that model text with task-specific and shared layers and differ in how information is shared between tasks.
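A minimal PyTorch sketch of the shared-layer idea: one LSTM shared across all tasks feeds a task-specific LSTM and classifier per task. The layer sizes, the concatenation of embeddings with shared states, and the class names here are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SharedLayerMTL(nn.Module):
    """Multi-task text classifier: one LSTM shared by all tasks plus a
    task-specific LSTM and output layer per task (sketch only)."""
    def __init__(self, vocab_size, num_classes_per_task, emb=128, hid=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.shared_lstm = nn.LSTM(emb, hid, batch_first=True)
        self.task_lstms = nn.ModuleList(
            [nn.LSTM(emb + hid, hid, batch_first=True) for _ in num_classes_per_task])
        self.task_heads = nn.ModuleList(
            [nn.Linear(hid, c) for c in num_classes_per_task])

    def forward(self, token_ids, task_id):
        x = self.embed(token_ids)                  # (batch, seq, emb)
        shared_out, _ = self.shared_lstm(x)        # representation shared by all tasks
        # the task-specific LSTM reads both the embeddings and the shared states
        task_in = torch.cat([x, shared_out], dim=-1)
        _, (h_n, _) = self.task_lstms[task_id](task_in)
        return self.task_heads[task_id](h_n[-1])   # logits for this task

model = SharedLayerMTL(vocab_size=10000, num_classes_per_task=[2, 5])
logits = model(torch.randint(0, 10000, (4, 20)), task_id=0)
print(logits.shape)  # torch.Size([4, 2])
```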
Adversarial Multi-task Learning for Text Classification
This paper proposes an adversarial multi-task learning framework that keeps the shared and private latent feature spaces from interfering with each other, and conducts extensive experiments on 16 different text classification tasks that demonstrate the benefits of the approach.
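One way to make a shared feature space task-invariant, sketched below with gradient reversal: a discriminator tries to identify which task a shared feature came from, while the reversed gradient trains the shared encoder to confuse it. This is an illustration of the adversarial idea under assumed shapes; the paper's full objective also includes an orthogonality term between shared and private features.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass, negated gradient on the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad

def adversarial_shared_loss(shared_feats, task_labels, discriminator):
    # Discriminator predicts the source task; the reversed gradient pushes
    # the shared features toward being task-invariant.
    logits = discriminator(GradReverse.apply(shared_feats))
    return nn.functional.cross_entropy(logits, task_labels)

discriminator = nn.Linear(128, 16)      # 16 tasks, feature size assumed
feats = torch.randn(32, 128, requires_grad=True)
labels = torch.randint(0, 16, (32,))
adversarial_shared_loss(feats, labels, discriminator).backward()
```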
Long Short-Term Memory Neural Networks for Chinese Word Segmentation
A novel neural network model for Chinese word segmentation is proposed; it adopts the long short-term memory (LSTM) neural network to keep important previous information in its memory cells, avoiding the window-size limit of local context.
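A small sketch of segmentation framed as character-level sequence labeling with B/M/E/S tags, the common formulation for LSTM segmenters; the vocabulary size, hidden size, and tag set details are placeholders, not the paper's settings.

```python
import torch
import torch.nn as nn

TAGS = ["B", "M", "E", "S"]  # begin / middle / end of word, single-character word

class LSTMSegmenter(nn.Module):
    def __init__(self, num_chars, emb=64, hid=100):
        super().__init__()
        self.embed = nn.Embedding(num_chars, emb)
        self.lstm = nn.LSTM(emb, hid, batch_first=True)  # carries left context in its memory cell
        self.out = nn.Linear(hid, len(TAGS))

    def forward(self, char_ids):
        h, _ = self.lstm(self.embed(char_ids))
        return self.out(h)  # per-character tag scores

model = LSTMSegmenter(num_chars=6000)
scores = model(torch.randint(0, 6000, (1, 12)))
print(scores.argmax(-1))  # predicted B/M/E/S tag index for each character
```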
Fine-grained Opinion Mining with Recurrent Neural Networks and Word Embeddings
This work proposes a general class of discriminative models based on recurrent neural networks and word embeddings that can be successfully applied to fine-grained opinion mining tasks without any task-specific feature engineering effort.
Implicit Discourse Relation Detection via a Deep Architecture with Gated Relevance Network
The use of word embeddings to replace the original words is proposed to overcome the data sparsity problem, and a gated relevance network is adopted to capture the semantic interaction between word pairs.
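A sketch of one plausible form of a gated relevance score between two word embeddings: a bilinear-tensor term and a single-layer nonlinear term, mixed by a learned gate. The dimensions, slice count, and exact gating form are assumptions for illustration.

```python
import torch
import torch.nn as nn

class GatedRelevance(nn.Module):
    def __init__(self, dim=100, slices=4):
        super().__init__()
        self.bilinear = nn.Bilinear(dim, dim, slices)  # tensor (bilinear) interaction
        self.single = nn.Linear(2 * dim, slices)       # single-layer nonlinear interaction
        self.gate = nn.Linear(2 * dim, slices)         # mixes the two interaction terms
        self.v = nn.Linear(slices, 1)                  # reduce slices to one relevance score

    def forward(self, e1, e2):
        pair = torch.cat([e1, e2], dim=-1)
        g = torch.sigmoid(self.gate(pair))
        s = g * self.bilinear(e1, e2) + (1 - g) * torch.tanh(self.single(pair))
        return self.v(s).squeeze(-1)

score = GatedRelevance()(torch.randn(8, 100), torch.randn(8, 100))
print(score.shape)  # torch.Size([8]) -- one relevance score per word pair
```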
Deep Multi-Task Learning with Shared Memory
Experiments on two groups of text classification tasks show that the proposed architectures can improve the performance of a task with the help of other related tasks.
Learning Context-Sensitive Word Embeddings with Neural Tensor Skip-Gram Model
A general architecture for efficiently learning word and topic embeddings is presented; it is an extension of the Skip-Gram model and can model the interaction between words and topics simultaneously.
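A rough sketch of the idea: the target word's vector is modulated by a topic vector through a tensor interaction (modeled here with nn.Bilinear), and the resulting context-sensitive vector is scored against context words as in skip-gram. Sizes and the exact composition are assumptions, not the paper's parameterization.

```python
import torch
import torch.nn as nn

class NeuralTensorSkipGram(nn.Module):
    def __init__(self, vocab, topics, dim=100):
        super().__init__()
        self.word = nn.Embedding(vocab, dim)
        self.topic = nn.Embedding(topics, dim)
        self.ctx = nn.Embedding(vocab, dim)
        self.tensor = nn.Bilinear(dim, dim, dim)  # word-topic tensor interaction

    def forward(self, word_id, topic_id, context_id):
        w = torch.tanh(self.tensor(self.word(word_id), self.topic(topic_id)))
        return (w * self.ctx(context_id)).sum(-1)  # score for (word, topic, context)

model = NeuralTensorSkipGram(vocab=10000, topics=50)
logit = model(torch.tensor([3]), torch.tensor([7]), torch.tensor([42]))
# Train skip-gram style with negative sampling: maximize this score for
# observed context words and minimize it for sampled negatives.
```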
Searching for Effective Neural Extractive Summarization: What Works and What’s Next
This paper seeks to better understand how neural extractive summarization systems could benefit from different types of model architectures, transferable knowledge and learning schemas, and finds an effective way to improve the current framework.
Meta Multi-Task Learning for Sequence Modeling
In a new scheme for sharing the composition function across multiple tasks, a shared meta-network captures the meta-knowledge of semantic composition and generates the parameters of the task-specific semantic composition models.
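A compact sketch of the parameter-generation idea: a shared meta-network maps a task embedding to the weights of a task-specific layer, so the per-task composition parameters are generated rather than stored. The use of a plain linear composition layer and all shapes here are assumptions for illustration.

```python
import torch
import torch.nn as nn

class MetaGeneratedLinear(nn.Module):
    def __init__(self, num_tasks, in_dim=64, out_dim=64, task_dim=32):
        super().__init__()
        self.task_embed = nn.Embedding(num_tasks, task_dim)
        # meta-network: task embedding -> flattened weight matrix and bias
        self.meta = nn.Linear(task_dim, in_dim * out_dim + out_dim)
        self.in_dim, self.out_dim = in_dim, out_dim

    def forward(self, x, task_id):
        params = self.meta(self.task_embed(task_id))            # generated parameters
        w = params[: self.in_dim * self.out_dim].view(self.out_dim, self.in_dim)
        b = params[self.in_dim * self.out_dim:]
        return torch.tanh(nn.functional.linear(x, w, b))        # task-specific composition

layer = MetaGeneratedLinear(num_tasks=3)
out = layer(torch.randn(5, 64), torch.tensor(1))
print(out.shape)  # torch.Size([5, 64])
```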