Gated Self-Matching Networks for Reading Comprehension and Question Answering
Gated self-matching networks for reading-comprehension-style question answering, which aims to answer questions from a given passage, are presented and hold first place on the SQuAD leaderboard for both single and ensemble models.
Learning Sentiment-Specific Word Embedding for Twitter Sentiment Classification
Three neural networks are developed that effectively incorporate supervision from the sentiment polarity of text (e.g., sentences or tweets) into their loss functions, and the performance of SSWE is further improved by concatenating SSWE with an existing feature set.
Unified Language Model Pre-training for Natural Language Understanding and Generation
A new Unified pre-trained Language Model (UniLM) is presented that can be fine-tuned for both natural language understanding and generation tasks, and that compares favorably with BERT on the GLUE benchmark and the SQuAD 2.0 and CoQA question answering tasks.
Adaptive Recursive Neural Network for Target-dependent Twitter Sentiment Classification
AdaRNN adaptively propagates the sentiments of words to the target depending on the context and the syntactic relationships between them, and it is shown that AdaRNN improves over the baseline methods.
Target-dependent Twitter Sentiment Classification
This paper proposes to improve target-dependent Twitter sentiment classification by incorporating target-dependent features and taking related tweets into consideration; experimental results show that this approach greatly improves the performance of target-dependent sentiment classification.
Neural Question Generation from Text: A Preliminary Study
A preliminary study on neural question generation from text with the SQuAD dataset is conducted, and the experiment results show that the method can produce fluent and diverse questions.
Topic Aware Neural Response Generation
A topic aware sequence-to-sequence (TA-Seq2Seq) model is proposed that utilizes topics to simulate the prior knowledge that guides humans to form informative and interesting responses in conversation, and that leverages topic information in generation through a joint attention mechanism and a biased generation probability.
Low-Quality Product Review Detection in Opinion Summarization
Experimental results show that the proposed approach effectively discriminates low-quality reviews from high-quality ones and enhances the task of opinion summarization by detecting and filtering low-quality reviews.
Mean Field Multi-Agent Reinforcement Learning
Existing multi-agent reinforcement learning methods are typically limited to a small number of agents; as the number of agents grows large, learning becomes intractable due to the curse of dimensionality.
HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization
The pre-trained HIBERT is applied to the summarization model, where it outperforms its randomly initialized counterpart by 1.25 ROUGE on the CNN/Dailymail dataset and by 2.0 ROUGE on a version of the New York Times dataset.