Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond
TLDR
This work proposes several novel models that address critical problems in summarization that are not adequately modeled by the basic architecture, such as modeling key-words, capturing the hierarchy of sentence-to-word structure, and emitting words that are rare or unseen at training time.
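One of the mechanisms named above, hierarchical sentence-to-word attention, reduces to re-weighting word-level attention by sentence-level attention. A minimal sketch of that idea (function name and shapes are mine, not the paper's code):

```python
import torch

def hierarchical_attention(word_scores, sent_scores, sent_index):
    """Sketch of word attention re-weighted by sentence attention (illustrative).

    Each source word's weight is scaled by the weight of the sentence
    containing it, then the combined weights are renormalized.

    word_scores: (src_len,)   raw word attention logits
    sent_scores: (num_sents,) raw sentence attention logits
    sent_index:  (src_len,)   long tensor mapping each word to its sentence
    """
    w = torch.softmax(word_scores, dim=0) * torch.softmax(sent_scores, dim=0)[sent_index]
    return w / w.sum()

# usage: 5 words spread over 2 sentences
weights = hierarchical_attention(torch.randn(5), torch.randn(2),
                                 torch.tensor([0, 0, 0, 1, 1]))
```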
A Structured Self-attentive Sentence Embedding
TLDR
A new model for extracting an interpretable sentence embedding by introducing self-attention is proposed, which uses a 2-D matrix to represent the embedding, with each row of the matrix attending on a different part of the sentence.
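The 2-D embedding has a compact closed form: A = softmax(W_s2 tanh(W_s1 Hᵀ)) and M = AH, where H holds the biLSTM hidden states. A minimal PyTorch sketch, with d_a and r following the paper's notation (module name and default sizes are illustrative):

```python
import torch
import torch.nn as nn

class StructuredSelfAttention(nn.Module):
    """Sketch of the 2-D sentence embedding: r attention rows over states H.

    A = softmax(W_s2 · tanh(W_s1 · H^T));  M = A · H, so each of the r rows
    of M attends to a different part of the sentence.
    """
    def __init__(self, hidden_size, d_a=350, r=30):
        super().__init__()
        self.ws1 = nn.Linear(hidden_size, d_a, bias=False)
        self.ws2 = nn.Linear(d_a, r, bias=False)

    def forward(self, H):
        # H: (batch, seq_len, hidden), e.g. biLSTM outputs
        A = torch.softmax(self.ws2(torch.tanh(self.ws1(H))), dim=1)  # (batch, seq_len, r)
        M = torch.bmm(A.transpose(1, 2), H)                          # (batch, r, hidden)
        return M, A  # A can be inspected for interpretability
```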
SummaRuNNer: A Recurrent Neural Network Based Sequence Model for Extractive Summarization of Documents
We present SummaRuNNer, a Recurrent Neural Network (RNN) based sequence model for extractive summarization of documents and show that it achieves performance better than or comparable to state-of-the-art.
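The core of such an extractive model is a sequence labeler over sentence embeddings. A simplified sketch (hypothetical module; the paper's classifier additionally factors in salience, novelty, and position terms):

```python
import torch
import torch.nn as nn

class ExtractiveScorer(nn.Module):
    """Sketch of a SummaRuNNer-style extractive model: a GRU reads a sequence
    of sentence embeddings and a sigmoid scores each sentence for inclusion.
    """
    def __init__(self, sent_dim, hidden):
        super().__init__()
        self.rnn = nn.GRU(sent_dim, hidden, batch_first=True, bidirectional=True)
        self.clf = nn.Linear(2 * hidden, 1)

    def forward(self, sent_embs):
        # sent_embs: (batch, num_sents, sent_dim)
        states, _ = self.rnn(sent_embs)                      # (batch, num_sents, 2*hidden)
        return torch.sigmoid(self.clf(states)).squeeze(-1)   # P(include each sentence)
```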
ABCNN: Attention-Based Convolutional Neural Network for Modeling Sentence Pairs
TLDR
This work presents a general Attention Based Convolutional Neural Network (ABCNN) for modeling a pair of sentences and proposes three attention schemes that integrate mutual influence between sentences into CNNs; thus, the representation of each sentence takes into consideration its counterpart.
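The mutual-influence idea rests on an attention matrix whose entry (i, j) scores how well word i of one sentence matches word j of the other; ABCNN-1, for instance, uses 1/(1 + Euclidean distance) and feeds the matrix into the convolution as extra input channels. A sketch of just that matrix (function name is mine):

```python
import torch

def attention_matrix(s1, s2):
    """ABCNN-style match matrix between two sentences (sketch).

    A[i, j] = 1 / (1 + ||s1_i - s2_j||), i.e. high where the two word
    representations are close. s1: (len1, dim), s2: (len2, dim).
    """
    dists = torch.cdist(s1.unsqueeze(0), s2.unsqueeze(0)).squeeze(0)  # (len1, len2)
    return 1.0 / (1.0 + dists)
```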
Classifying Relations by Ranking with Convolutional Neural Networks
TLDR
This work proposes a new pairwise ranking loss function that makes it easy to reduce the impact of artificial classes, and shows that it is more effective than a CNN followed by a softmax classifier; using only word embeddings as input features is enough to achieve state-of-the-art results.
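The ranking loss involves only the score of the true class and of one competing negative class, which is what lets the artificial "Other" class be excluded from the positive term. A sketch (function name is mine; margin and scale defaults are the commonly cited settings for this loss):

```python
import torch

def pairwise_ranking_loss(pos_score, neg_score, m_pos=2.5, m_neg=0.5, gamma=2.0):
    """Pairwise ranking loss of the CR-CNN form (sketch):

        L = log(1 + exp(gamma*(m_pos - s_pos))) + log(1 + exp(gamma*(m_neg + s_neg)))

    Only the true class and one sampled negative class contribute, so an
    artificial class can be handled by simply omitting its positive term.
    """
    return (torch.log1p(torch.exp(gamma * (m_pos - pos_score))) +
            torch.log1p(torch.exp(gamma * (m_neg + neg_score)))).mean()
```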
LSTM-based Deep Learning Models for non-factoid answer selection
TLDR
A general deep learning framework is applied for the answer selection task, which does not depend on manually defined features or linguistic tools, and is extended in two directions to define a more composite representation for questions and answers.
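The framework's basic form encodes question and answer with a shared biLSTM and scores the pair by cosine similarity under a max-margin objective. A minimal sketch (class name and sizes are illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QALSTM(nn.Module):
    """Sketch of a QA-LSTM matcher: a shared biLSTM encodes both texts,
    max pooling over time gives fixed vectors, cosine similarity scores them.
    """
    def __init__(self, emb_dim, hidden):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)

    def encode(self, x):                      # x: (batch, seq_len, emb_dim)
        h, _ = self.lstm(x)
        return h.max(dim=1).values            # max pooling over time

    def forward(self, q, a):
        return F.cosine_similarity(self.encode(q), self.encode(a), dim=1)

def hinge_loss(sim_pos, sim_neg, margin=0.2):
    # train so the correct answer outscores a sampled wrong one by a margin
    return torch.clamp(margin - sim_pos + sim_neg, min=0).mean()
```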
Pointing the Unknown Words
TLDR
A novel way to deal with rare and unseen words in neural network models is proposed, using attention with two softmax layers to predict the next word in conditional language models.
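Of the two softmax layers, one covers a word shortlist and the other points at source positions via attention; a switching network decides between them. The sketch below combines them as a soft mixture for simplicity, though the paper's switch makes a discrete choice (function name is mine):

```python
import torch

def mix_pointer_softmaxes(p_switch, shortlist_dist, location_dist, src_token_ids):
    """Combine a shortlist softmax and a location softmax into one output
    distribution (sketch; assumes source tokens are mapped into the output
    vocabulary so that copied probability mass lands on the copied words).

    p_switch:       (batch, 1) probability of generating from the shortlist
    shortlist_dist: (batch, vocab) softmax over the shortlist vocabulary
    location_dist:  (batch, src_len) attention softmax over source positions
    src_token_ids:  (batch, src_len) vocabulary ids of the source tokens
    """
    generated = p_switch * shortlist_dist
    copied = (1.0 - p_switch) * location_dist
    return generated.scatter_add(1, src_token_ids, copied)
```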
Applying deep learning to answer selection: A study and an open task
TLDR
A general deep learning framework is applied to address the non-factoid question answering task; it demonstrates superior performance compared to the baseline methods, and various extensions give further improvements.
Attentive Pooling Networks
TLDR
The empirical results, from three very different benchmark tasks of question answering/answer selection, demonstrate that the proposed models outperform a variety of strong baselines and achieve state-of-the-art performance in all the benchmarks.
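Attentive pooling computes a soft alignment G = tanh(Q U Aᵀ) between the two inputs and pools each side with attention derived from row/column maxima of G, so each representation depends on its counterpart. A minimal sketch (module name and initialization are mine):

```python
import torch
import torch.nn as nn

class AttentivePooling(nn.Module):
    """Sketch of attentive pooling over a (question, answer) pair."""
    def __init__(self, dim):
        super().__init__()
        self.U = nn.Parameter(torch.randn(dim, dim) * 0.01)

    def forward(self, Q, A):
        # Q: (batch, len_q, dim), A: (batch, len_a, dim)
        G = torch.tanh(Q @ self.U @ A.transpose(1, 2))       # (batch, len_q, len_a)
        attn_q = torch.softmax(G.max(dim=2).values, dim=1)   # weight of each q word
        attn_a = torch.softmax(G.max(dim=1).values, dim=1)   # weight of each a word
        r_q = (Q * attn_q.unsqueeze(2)).sum(dim=1)           # pooled question vector
        r_a = (A * attn_a.unsqueeze(2)).sum(dim=1)           # pooled answer vector
        return r_q, r_a
```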
Improved Representation Learning for Question Answer Matching
TLDR
This work develops hybrid models that process the text using both convolutional and recurrent neural networks, combining the merits on extracting linguistic information from both structures to address passage answer selection.
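A common instantiation of such a hybrid runs a CNN over biLSTM hidden states, so the recurrent layer supplies long-range context while the convolution extracts local n-gram features. A sketch (module name and sizes are illustrative):

```python
import torch
import torch.nn as nn

class RNNCNNEncoder(nn.Module):
    """Sketch of a hybrid encoder: biLSTM for order and context, then a CNN
    over its hidden states for local features, then max pooling.
    """
    def __init__(self, emb_dim, hidden, filters=400, width=3):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.conv = nn.Conv1d(2 * hidden, filters, kernel_size=width, padding=1)

    def forward(self, x):                              # x: (batch, seq_len, emb_dim)
        h, _ = self.lstm(x)                            # (batch, seq_len, 2*hidden)
        c = torch.relu(self.conv(h.transpose(1, 2)))   # (batch, filters, seq_len)
        return c.max(dim=2).values                     # fixed-size sentence vector
```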