A Decomposable Attention Model for Natural Language Inference


We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable. On the Stanford Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results with almost an order of magnitude fewer parameters than previous work and without relying on any word-order information. Adding intra-sentence attention that takes a minimum amount of order into account yields further improvements.
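The decomposition the abstract describes proceeds in three stages: attend (soft-align each token of one sentence to the other), compare (process each token and its aligned counterpart independently, which is what makes the model trivially parallelizable), and aggregate (sum the comparison vectors and classify). The following is a minimal numpy sketch of that pipeline under stated assumptions: the feed-forward networks `F`, `G`, `H` here are toy one-hidden-layer ReLU stand-ins with random weights and small dimensions, not the trained networks or sizes from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h, n_classes = 8, 16, 3  # toy embedding/hidden sizes (assumed, not the paper's)

def mlp(w1, w2):
    # stand-in for the paper's feed-forward networks F, G, H
    return lambda x: np.maximum(x @ w1, 0.0) @ w2

F = mlp(0.1 * rng.normal(size=(d, h)), 0.1 * rng.normal(size=(h, h)))
G = mlp(0.1 * rng.normal(size=(2 * d, h)), 0.1 * rng.normal(size=(h, h)))
H = mlp(0.1 * rng.normal(size=(2 * h, h)), 0.1 * rng.normal(size=(h, n_classes)))

def softmax(x, axis):
    z = np.exp(x - x.max(axis=axis, keepdims=True))
    return z / z.sum(axis=axis, keepdims=True)

def decomposable_attention(a, b):
    """a: (len_a, d) premise embeddings, b: (len_b, d) hypothesis embeddings."""
    # Attend: unnormalized alignment scores e_ij = F(a_i) . F(b_j)
    e = F(a) @ F(b).T                  # (len_a, len_b)
    beta = softmax(e, axis=1) @ b      # subphrase of b aligned to each a_i
    alpha = softmax(e, axis=0).T @ a   # subphrase of a aligned to each b_j
    # Compare: each token paired with its aligned subphrase, independently
    v1 = G(np.concatenate([a, beta], axis=1))    # (len_a, h)
    v2 = G(np.concatenate([b, alpha], axis=1))   # (len_b, h)
    # Aggregate: order-insensitive sum over tokens, then classify
    v = np.concatenate([v1.sum(axis=0), v2.sum(axis=0)])
    return softmax(H(v[None, :]), axis=1)[0]     # class probabilities

# toy premise (5 tokens) and hypothesis (7 tokens)
probs = decomposable_attention(rng.normal(size=(5, d)), rng.normal(size=(7, d)))
```

Note that the comparison step touches each token position separately and the aggregation is a plain sum, so nothing except the optional intra-sentence attention ever sees word order, consistent with the abstract's claim.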


