• Publications
Learning Sentiment-Specific Word Embedding for Twitter Sentiment Classification
TLDR: We present a method that learns word embeddings for Twitter sentiment classification, encoding sentiment information in the continuous representation of words.
  • 902 citations  • 81 highly influential citations  • PDF
Gated Self-Matching Networks for Reading Comprehension and Question Answering
TLDR: In this paper, we present gated self-matching networks for reading comprehension-style question answering, which aims to answer questions from a given passage.
  • 450 citations  • 80 highly influential citations  • PDF
Unified Language Model Pre-training for Natural Language Understanding and Generation
TLDR: This paper presents a new Unified pre-trained Language Model (UniLM) that can be fine-tuned for both natural language understanding and generation tasks.
  • 277 citations  • 50 highly influential citations  • PDF
Adaptive Recursive Neural Network for Target-dependent Twitter Sentiment Classification
TLDR: We propose the Adaptive Recursive Neural Network (AdaRNN) for target-dependent Twitter sentiment classification.
  • 379 citations  • 46 highly influential citations  • PDF
Neural Question Generation from Text: A Preliminary Study
TLDR: We propose to apply a neural encoder-decoder model to generate answer-focused questions from natural language sentences.
  • 145 citations  • 43 highly influential citations  • PDF
Topic sentiment analysis in twitter: a graph-based hashtag sentiment classification approach
Twitter is one of the biggest platforms, where massive numbers of instant messages (i.e., tweets) are published every day. Users tend to express their real feelings freely on Twitter, which makes it an ideal …
  • 367 citations  • 30 highly influential citations  • PDF
Selective Encoding for Abstractive Sentence Summarization
TLDR: We propose a selective encoding model that extends the sequence-to-sequence framework for abstractive sentence summarization.
  • 166 citations  • 29 highly influential citations  • PDF
Recognizing Named Entities in Tweets
TLDR: We propose to combine a K-Nearest Neighbors classifier with a linear Conditional Random Fields (CRF) model under a semi-supervised learning framework to tackle the challenges of recognizing named entities in tweets.
  • 392 citations  • 26 highly influential citations  • PDF
VL-BERT: Pre-training of Generic Visual-Linguistic Representations
TLDR: We introduce a new pre-trainable generic representation for visual-linguistic tasks, called Visual-Linguistic BERT (VL-BERT for short).
  • 152 citations  • 23 highly influential citations  • PDF
HIBERT: Document Level Pre-training of Hierarchical Bidirectional Transformers for Document Summarization
TLDR: We propose a hierarchical Transformer sentence encoder for document summarization, together with a method to pre-train it on unlabeled data.
  • 80 citations  • 21 highly influential citations  • PDF