Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention

Abstract

In this paper, we propose a sentence encoding-based model for recognizing text entailment. In our approach, sentence encoding is a two-stage process. First, average pooling is applied over the outputs of a word-level bidirectional LSTM (biLSTM) to generate a first-stage sentence representation. Second, an attention mechanism replaces average pooling on the same sentence to produce a better representation. Instead of using the target sentence to attend to words in the source sentence, we use the sentence's own first-stage representation to attend to the words appearing in itself, which we call "Inner-Attention". Experiments conducted on the Stanford Natural Language Inference (SNLI) Corpus demonstrate the effectiveness of the "Inner-Attention" mechanism. With fewer parameters, our model outperforms the existing best sentence encoding-based approach by a large margin.
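
Below is a minimal sketch of the two-stage encoder described in the abstract, written in PyTorch. The dimensions, parameter names, and the exact parameterization of the attention layer are illustrative assumptions, not the authors' released code; it only shows the overall idea of average pooling followed by Inner-Attention.

# Sketch: word-level biLSTM, average pooling (stage 1), then Inner-Attention
# in which the sentence's own pooled representation attends over its hidden states.
import torch
import torch.nn as nn
import torch.nn.functional as F


class InnerAttentionEncoder(nn.Module):
    def __init__(self, vocab_size, embed_dim=300, hidden_dim=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Word-level bidirectional LSTM; outputs have size 2 * hidden_dim.
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # Projections used by the attention layer (illustrative parameterization).
        self.W_y = nn.Linear(2 * hidden_dim, 2 * hidden_dim, bias=False)
        self.W_h = nn.Linear(2 * hidden_dim, 2 * hidden_dim, bias=False)
        self.w = nn.Linear(2 * hidden_dim, 1, bias=False)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len)
        Y, _ = self.bilstm(self.embed(token_ids))            # (batch, seq_len, 2H)

        # Stage 1: average pooling over the biLSTM outputs.
        r_avg = Y.mean(dim=1)                                 # (batch, 2H)

        # Stage 2: Inner-Attention -- the first-stage representation of the
        # sentence attends over that same sentence's hidden states.
        M = torch.tanh(self.W_y(Y) + self.W_h(r_avg).unsqueeze(1))
        alpha = F.softmax(self.w(M).squeeze(-1), dim=1)       # (batch, seq_len)
        r_att = torch.bmm(alpha.unsqueeze(1), Y).squeeze(1)   # (batch, 2H)
        return r_att


if __name__ == "__main__":
    enc = InnerAttentionEncoder(vocab_size=1000)
    ids = torch.randint(0, 1000, (4, 12))   # toy batch of 4 sentences, length 12
    print(enc(ids).shape)                    # torch.Size([4, 600])

For the entailment task itself, the premise and hypothesis would each be encoded this way and their representations combined (e.g., by concatenation) before a classifier; that part is omitted here.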



Cite this paper

@article{Liu2016LearningNL,
  title   = {Learning Natural Language Inference using Bidirectional LSTM model and Inner-Attention},
  author  = {Yang Liu and Chengjie Sun and Lei Lin and Xiaolong Wang},
  journal = {CoRR},
  volume  = {abs/1605.09090},
  year    = {2016}
}