Memory Networks

Abstract

We describe a new class of learning models called memory networks. Memory networks reason with inference components combined with a long-term memory component; they learn how to use these jointly. The long-term memory can be read and written to, with the goal of using it for prediction. We investigate these models in the context of question answering (QA) where the long-term memory effectively acts as a (dynamic) knowledge base, and the output is a textual response. We evaluate them on a large-scale QA task, and a smaller, but more complex, toy task generated from a simulated world. In the latter, we show the reasoning power of such models by chaining multiple supporting sentences to answer questions that require understanding the intension of verbs.
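The abstract's pipeline — write inputs into a long-term memory, then answer a question by scoring it against stored memories and reading out the best supporting one — can be sketched as a toy in Python. All names here (`MemoryNetwork`, `write`, `answer`) are illustrative, not the paper's API, and the word-overlap score is a crude stand-in for the learned scoring function; a real memory network learns embedding matrices for both writing and scoring.

```python
# Toy memory-network sketch: memories are stored as bag-of-words counters,
# and answering a question means scoring the query against every memory
# and returning the best-matching supporting sentence.
from collections import Counter

class MemoryNetwork:
    def __init__(self):
        self.memories = []  # list of (original sentence, bag-of-words) pairs

    def write(self, sentence):
        """Input/generalization step: featurize the input and store it."""
        bow = Counter(sentence.lower().split())
        self.memories.append((sentence, bow))

    def answer(self, question):
        """Output/response step: retrieve the best supporting memory."""
        q = Counter(question.lower().split())
        # Word-overlap score: a crude stand-in for a learned match function.
        best = max(self.memories,
                   key=lambda m: sum((q & m[1]).values()))
        return best[0]

mn = MemoryNetwork()
mn.write("Joe went to the kitchen")
mn.write("Fred picked up the milk")
mn.write("Joe travelled to the office")
print(mn.answer("Where is the milk ?"))  # -> "Fred picked up the milk"
```

Answering multi-hop questions of the kind the paper's toy task poses would require chaining this retrieval step, using the first retrieved memory to re-score the rest.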



7 Figures and Tables


233 Citations

Semantic Scholar estimates that this publication has 233 citations based on the available data.


Cite this paper

@article{Weston2014MemoryN,
  title   = {Memory Networks},
  author  = {Jason Weston and Sumit Chopra and Antoine Bordes},
  journal = {CoRR},
  year    = {2014},
  volume  = {abs/1410.3916}
}