A Decomposable Attention Model for Natural Language Inference

@inproceedings{Parikh2016ADA,
  title={A Decomposable Attention Model for Natural Language Inference},
  author={Ankur P. Parikh and Oscar T{\"a}ckstr{\"o}m and Dipanjan Das and Jakob Uszkoreit},
  booktitle={EMNLP},
  year={2016}
}
We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable. On the Stanford Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results with almost an order of magnitude fewer parameters than previous work and without relying on any word-order information. Adding intra-sentence attention that takes a minimum amount of order into account yields further improvements.
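
The attend-compare-aggregate decomposition the abstract describes is compact enough to sketch directly. Below is a minimal PyTorch sketch, assuming pre-embedded input sentences. The helper names (feed_forward, DecomposableAttention), the default layer sizes, and the omission of the input projection and intra-sentence attention are illustrative simplifications of mine, not the authors' released implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F


def feed_forward(d_in, d_hid, d_out):
    # Two-layer ReLU feed-forward net; the paper uses nets of this general
    # shape for its F, G, and H components.
    return nn.Sequential(
        nn.Linear(d_in, d_hid), nn.ReLU(),
        nn.Linear(d_hid, d_out), nn.ReLU(),
    )


class DecomposableAttention(nn.Module):
    # Sizes below are illustrative assumptions, not the paper's exact setup.
    def __init__(self, d_embed=300, d_hidden=200, n_classes=3):
        super().__init__()
        self.f = feed_forward(d_embed, d_hidden, d_hidden)      # F: alignment scorer
        self.g = feed_forward(2 * d_embed, d_hidden, d_hidden)  # G: pair comparator
        self.h = nn.Sequential(                                 # H: final classifier
            nn.Linear(2 * d_hidden, d_hidden), nn.ReLU(),
            nn.Linear(d_hidden, n_classes),
        )

    def forward(self, a, b):
        # a: (batch, len_a, d_embed) premise embeddings
        # b: (batch, len_b, d_embed) hypothesis embeddings

        # Attend: e[i, j] = F(a_i) . F(b_j). Each token is scored
        # independently of its neighbors, which is what makes the model
        # trivially parallelizable.
        e = torch.bmm(self.f(a), self.f(b).transpose(1, 2))  # (batch, len_a, len_b)

        # Soft-align each sentence against the other.
        beta = torch.bmm(F.softmax(e, dim=2), b)                   # b aligned to each a_i
        alpha = torch.bmm(F.softmax(e, dim=1).transpose(1, 2), a)  # a aligned to each b_j

        # Compare: each (token, aligned subphrase) pair is processed separately.
        v1 = self.g(torch.cat([a, beta], dim=2))   # (batch, len_a, d_hidden)
        v2 = self.g(torch.cat([b, alpha], dim=2))  # (batch, len_b, d_hidden)

        # Aggregate: sum over tokens, discarding word order, then classify.
        return self.h(torch.cat([v1.sum(dim=1), v2.sum(dim=1)], dim=1))


# A quick shape check with random stand-in embeddings:
model = DecomposableAttention()
premise = torch.randn(4, 12, 300)    # batch of 4 pre-embedded premises
hypothesis = torch.randn(4, 9, 300)  # the paired hypotheses
logits = model(premise, hypothesis)  # (4, 3): {entailment, contradiction, neutral}

Note that the final sum over comparison vectors is order-independent, which is why the abstract can claim competitive results without relying on any word-order information.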
