Publications (sorted by influence)
Modeling Coverage for Neural Machine Translation
TLDR
The attention mechanism has enhanced state-of-the-art Neural Machine Translation (NMT) by jointly learning to align and translate (a coverage sketch follows this entry).
  • 481 citations (50 highly influential)
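The truncated summary stops before the paper's core contribution, which the title names: a coverage vector that accumulates past attention weights per source word so the model stops over- or under-translating. Below is a minimal NumPy sketch of one attention step with a coverage term; the weight names (W_h, W_s, w_c, v) are placeholder parameters, not necessarily the paper's notation.

```python
import numpy as np

def coverage_attention(dec_state, enc_states, coverage, W_h, W_s, w_c, v):
    """One additive-attention step with a coverage term (sketch).

    coverage holds the running sum of past attention weights per source
    position, letting the scorer penalize already-covered words.
    """
    # score_j = v . tanh(W_h h_j + W_s s_t + w_c * c_j)
    scores = np.tanh(enc_states @ W_h + dec_state @ W_s
                     + np.outer(coverage, w_c)) @ v
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()              # attention distribution over source words
    coverage = coverage + alpha       # update coverage with this step's attention
    context = alpha @ enc_states      # context vector fed to the decoder
    return context, alpha, coverage
```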
Improving the Transformer Translation Model with Document-Level Context
TLDR
We extend the Transformer translation model with a new context encoder to represent document-level context, which is then incorporated into the original encoder and decoder (see the sketch after this entry).
  • 86 citations (31 highly influential)
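As a rough illustration, the sketch below adds a second cross-attention step over pre-computed document-level context vectors inside a heavily simplified decoder layer. The single-head attention and the missing self-attention, feed-forward, and layer-norm sublayers are simplifications for brevity, not the paper's exact architecture.

```python
import numpy as np

def attend(queries, keys, values):
    # single-head scaled dot-product attention, no masking
    scores = queries @ keys.T / np.sqrt(keys.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ values

def decoder_layer_with_doc_context(x, enc_out, doc_ctx):
    # a full Transformer layer would also have self-attention, a
    # feed-forward sublayer, and layer normalization around each step
    x = x + attend(x, enc_out, enc_out)  # attend to the current sentence
    x = x + attend(x, doc_ctx, doc_ctx)  # attend to document-level context
    return x
```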
Neural Network Methods for Natural Language Processing
TLDR
Neural networks are a family of powerful machine learning models.
  • 336 citations (30 highly influential)
MobileFaceNets: Efficient CNNs for Accurate Real-time Face Verification on Mobile Devices
TLDR
We present a class of extremely efficient CNN models, MobileFaceNets, which use less than 1 million parameters and are specifically tailored for high-accuracy real-time face verification on mobile and embedded devices (a parameter-count sketch follows this entry).
  • 97 citations (29 highly influential)
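MobileFaceNets build on MobileNet-style depthwise-separable convolutions, which is what makes a sub-million-parameter budget plausible. The quick arithmetic below (with illustrative channel counts, not the paper's) shows the saving over a standard convolution.

```python
def conv_params(c_in, c_out, k):
    # standard convolution: every output channel mixes every input channel
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # one depthwise k x k filter per channel, then a 1x1 pointwise mix
    return c_in * k * k + c_in * c_out

# e.g. a 128 -> 128 channel layer with a 3x3 kernel:
print(conv_params(128, 128, 3))                 # 147456
print(depthwise_separable_params(128, 128, 3))  # 17536, roughly 8x fewer
```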
graph2vec: Learning Distributed Representations of Graphs
TLDR
We propose a neural embedding framework named graph2vec to learn data-driven distributed representations of arbitrarily sized graphs (see the sketch after this entry).
  • 146 citations (26 highly influential)
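graph2vec treats rooted subgraphs as the "words" of a graph and the whole graph as a "document", then learns doc2vec-style embeddings. The sketch below is a loose approximation, not the paper's implementation: it extracts Weisfeiler-Lehman subtree tokens (degree-based initial labels and the hash truncation are assumptions) and hands them to gensim's Doc2Vec.

```python
import hashlib
import networkx as nx
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

def wl_tokens(graph, iterations=2):
    """Weisfeiler-Lehman relabeling: repeatedly hash each node's label
    with its sorted neighbor labels; every label produced along the way
    becomes one token naming a rooted subgraph."""
    labels = {n: str(graph.degree(n)) for n in graph.nodes}
    tokens = list(labels.values())
    for _ in range(iterations):
        labels = {
            n: hashlib.md5((labels[n] + "".join(
                sorted(labels[m] for m in graph.neighbors(n))
            )).encode()).hexdigest()[:8]
            for n in graph.nodes
        }
        tokens.extend(labels.values())
    return tokens

# each graph becomes a 'document' of subgraph tokens for doc2vec
graphs = [nx.cycle_graph(5), nx.path_graph(5), nx.complete_graph(5)]
docs = [TaggedDocument(wl_tokens(g), [i]) for i, g in enumerate(graphs)]
model = Doc2Vec(docs, vector_size=16, min_count=1, epochs=50)
print(model.dv[0])  # embedding of the first graph
```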
Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention
TLDR
In this paper, we propose a unified deep learning framework for recognizing textual entailment which does not require any feature engineering or external resources (a pooling sketch follows this entry).
  • 186 citations (24 highly influential)
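Inner-attention here means that a sentence attends over its own BiLSTM hidden states to pool them into a fixed-size vector, rather than taking the final state or a plain mean. A minimal NumPy sketch with hypothetical projection parameters W and u:

```python
import numpy as np

def inner_attention_pool(H, W, u):
    """Pool BiLSTM outputs H (T x 2d) into one sentence vector by
    letting the sentence attend over its own hidden states."""
    scores = np.tanh(H @ W) @ u        # one relevance score per time step
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()               # softmax over time steps
    return alpha @ H                   # weighted sum of hidden states
```

The pooled premise and hypothesis vectors would then be combined (e.g. concatenated along with their difference) and passed to a classifier.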
Early Detection of Fake News on Social Media Through Propagation Path Classification with Recurrent and Convolutional Networks
TLDR
We detect fake news on social media at an early stage by classifying its propagation path with recurrent and convolutional networks, based on the user characteristics of its spreaders (see the sketch below).
  • 113 citations (18 highly influential)
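The title suggests each news item's propagation path is a sequence of feature vectors, one per spreading user, classified with recurrent and convolutional networks. A PyTorch sketch under that reading; the layer sizes and the max-pool/concatenate layout are illustrative, not the paper's exact design.

```python
import torch
import torch.nn as nn

class PropagationPathClassifier(nn.Module):
    """Classify a propagation path (a sequence of user-feature vectors)
    with a GRU and a 1-D CNN whose summaries are concatenated."""
    def __init__(self, feat_dim=10, hidden=32):
        super().__init__()
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.conv = nn.Conv1d(feat_dim, hidden, kernel_size=3, padding=1)
        self.out = nn.Linear(2 * hidden, 2)   # fake vs. real

    def forward(self, path):                  # path: (batch, steps, feat_dim)
        _, h = self.gru(path)                 # h: (1, batch, hidden)
        c = self.conv(path.transpose(1, 2))   # (batch, hidden, steps)
        c = c.max(dim=2).values               # max-pool over time
        return self.out(torch.cat([h[-1], c], dim=1))
```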
Transductive Unbiased Embedding for Zero-Shot Learning
TLDR
We propose a straightforward yet effective method named Quasi-Fully Supervised Learning (QFSL) to alleviate the bias toward seen classes in zero-shot learning.
  • 87 citations (18 highly influential)
Learning to Remember Translation History with a Continuous Cache
TLDR
We propose to augment NMT models with a lightweight cache-like memory network, which stores recent hidden representations as translation history (a cache sketch follows this entry).
  • 82 citations (18 highly influential)
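A minimal sketch of the cache idea: keep a bounded FIFO memory of recent hidden states and, at each decoding step, mix the current representation with a softmax-weighted read over the cache. The fixed deque size and the scalar mixing weight are simplifications; in the paper the combination is learned.

```python
from collections import deque
import numpy as np

class ContinuousCache:
    """Bounded memory of recent (key, value) hidden-state pairs."""
    def __init__(self, size=25):
        self.mem = deque(maxlen=size)     # oldest entry evicted when full

    def write(self, key, value):
        self.mem.append((key, value))

    def read(self, query, mix=0.5):
        if not self.mem:
            return query
        keys = np.stack([k for k, _ in self.mem])
        vals = np.stack([v for _, v in self.mem])
        scores = keys @ query             # similarity to each cached key
        w = np.exp(scores - scores.max())
        w /= w.sum()
        # blend the current state with the cache read; 'mix' stands in
        # for what would be a learned gate
        return (1 - mix) * query + mix * (w @ vals)
```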
Neural Machine Translation with Reconstruction
TLDR
We propose a novel encoder-decoder-reconstructor framework for NMT, which improves the adequacy of NMT output and achieves superior translation results over state-of-the-art NMT and statistical MT systems (see the objective sketch below).
  • 145 citations (17 highly influential)
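The reconstructor rebuilds the source sentence from the decoder's hidden states, and its likelihood is added to the usual translation objective, so translations that drop source content are penalized. A sketch of the combined training signal, with a hypothetical interpolation weight lam:

```python
def training_objective(log_p_target_given_source, log_p_source_rebuilt, lam=1.0):
    """Translation likelihood plus a reconstruction term (sketch).

    log_p_source_rebuilt is the reconstructor's log-probability of
    recovering the source from the decoder's hidden states; lam trades
    the reconstruction (adequacy) signal off against the likelihood."""
    return log_p_target_given_source + lam * log_p_source_rebuilt
```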