Corpus ID: 9605730

Large-scale Simple Question Answering with Memory Networks

@article{Bordes2015LargescaleSQ,
  title={Large-scale Simple Question Answering with Memory Networks},
  author={Antoine Bordes and Nicolas Usunier and Sumit Chopra and Jason Weston},
  journal={ArXiv},
  year={2015},
  volume={abs/1506.02075}
}
Training large-scale question answering systems is complicated because training sources usually cover a small portion of the range of possible questions. This paper studies the impact of multitask and transfer learning for simple question answering, a setting in which the reasoning required to answer is quite easy, as long as one can retrieve the correct evidence given a question, which can be difficult in large-scale conditions. To this end, we introduce a new dataset of 100k questions that…
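The bottleneck the abstract highlights is evidence retrieval: given a question, find the single supporting fact among millions. The following is a minimal, hypothetical sketch of that retrieve-and-score pattern, not the paper's actual model; the facts, vocabulary, and embedding dimensions are invented for illustration, and the (untrained) embeddings are random.

import numpy as np

# Hypothetical sketch: embedding-based fact retrieval for simple QA.
# Not the paper's implementation; facts, vocabulary, and dimensions are made up.
rng = np.random.default_rng(0)
EMB_DIM = 16

facts = [
    ("barack_obama", "place_of_birth", "honolulu"),
    ("barack_obama", "spouse", "michelle_obama"),
    ("paris", "capital_of", "france"),
]

# Shared vocabulary over question words and fact symbols.
vocab = {w for s, r, o in facts for w in (s, r, o)}
vocab |= {"where", "was", "born", "who", "is", "married", "to"}
word2id = {w: i for i, w in enumerate(sorted(vocab))}
E = rng.normal(scale=0.1, size=(len(word2id), EMB_DIM))  # would be learned in practice

def embed(tokens):
    """Bag-of-words embedding: sum of the vectors of known tokens."""
    ids = [word2id[t] for t in tokens if t in word2id]
    return E[ids].sum(axis=0) if ids else np.zeros(EMB_DIM)

def answer(question_tokens):
    """Score the question against each (subject, relation) and return the top object."""
    q = embed(question_tokens)
    scores = [q @ embed([s, r]) for s, r, o in facts]
    return facts[int(np.argmax(scores))][2]

print(answer(["where", "was", "barack_obama", "born"]))

Training would fit the embedding matrix so that each question scores highest against its supporting fact; with random embeddings the printed answer is meaningless, but the retrieve-then-score structure is the part that the paper's large-scale setting makes hard.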
Citations

LSTMs and Dynamic Memory Networks for Human-Written Simple Question Answering
One of the larger goals of Artificial Intelligence research is to produce methods that improve natural language processing and understanding and increase the ability of agents to interact and…
Question Answering over Knowledge Base using Factual Memory Networks
Factual Memory Network is introduced, which learns to answer questions by extracting and reasoning over relevant facts from a Knowledge Base, and improves the run-time efficiency of the model using various computational heuristics.
R3: Reinforced Reader-Ranker for Open-Domain Question Answering
A new pipeline for open-domain QA with a Ranker component, which learns to rank retrieved passages in terms of likelihood of generating the ground-truth answer to a given question, and a novel method that jointly trains the Ranker along with an answer-generation Reader model, based on reinforcement learning.
Simple and Effective Question Answering with Recurrent Neural Networks
This work formulates the first-order factoid question answering task as two machine learning problems: detecting the entities in the question, and classifying the question as one of the relation types in the KB (a toy sketch of this two-step decomposition appears after this list).
Question Answering with Dynamic Memory Networks from Knowledge Encoded in Natural Language
Research has been done on the use of various memory network [1] models for question answering. However, as far as our reviews went, there has been limited investigation on the use of such models with…
R3: Reinforced Ranker-Reader for Open-Domain Question Answering
This paper proposes a new pipeline for open-domain QA with a Ranker component, which learns to rank retrieved passages in terms of likelihood of extracting the ground-truth answer to a given question, and proposes a novel method that jointly trains the Ranker along with an answer-extraction Reader model, based on reinforcement learning.
No Need to Pay Attention: Simple Recurrent Neural Networks Work!
This work formulates the task as two machine learning problems: detecting the entities in the question, and classifying the question as one of the relation types in the KB, and trains a recurrent neural network to solve each problem.
An empirical analysis of existing systems and datasets toward general simple question answering
This analysis, including shifting of training and test datasets and training on a union of the datasets, suggests that the progress in solving the SimpleQuestions dataset does not indicate the success of more general simple question answering.
Strong Baselines for Simple Question Answering over Knowledge Graphs with and without Neural Networks
The problem of question answering over knowledge graphs is examined, focusing on simple questions that can be answered by the lookup of a single fact, and basic LSTMs and GRUs plus a few heuristics yield accuracies that approach the state of the art.
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
This work argues for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering, and classifies these tasks into skill sets so that researchers can identify (and then rectify) the failings of their systems.
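Several of the citing papers above cast simple question answering as two subproblems: find the entity mentioned in the question, then classify which KB relation the question asks about. Below is a toy, rule-based stand-in for those two learned components (the papers use recurrent networks); the knowledge base, aliases, and keywords are hypothetical.

# Toy sketch of the two-step decomposition: entity detection + relation classification.
# In the cited papers both steps are learned models; here they are rule-based stand-ins.

KB = {
    ("barack_obama", "place_of_birth"): "honolulu",
    ("barack_obama", "spouse"): "michelle_obama",
}
ENTITY_ALIASES = {"barack obama": "barack_obama", "obama": "barack_obama"}
RELATION_KEYWORDS = {"born": "place_of_birth", "married": "spouse", "wife": "spouse"}

def detect_entity(question):
    """Step 1: longest KB alias found in the question (an entity tagger in the papers)."""
    q = question.lower()
    matches = [alias for alias in ENTITY_ALIASES if alias in q]
    return ENTITY_ALIASES[max(matches, key=len)] if matches else None

def classify_relation(question):
    """Step 2: keyword match (a relation classifier over KB relation types in the papers)."""
    q = question.lower()
    for keyword, relation in RELATION_KEYWORDS.items():
        if keyword in q:
            return relation
    return None

def answer(question):
    """Combine the two predictions into a single KB lookup."""
    return KB.get((detect_entity(question), classify_relation(question)))

print(answer("Where was Barack Obama born?"))  # -> honolulu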

References

Showing 1–10 of 25 references
Memory Networks
This work describes a new class of learning models called memory networks, which reason with inference components combined with a long-term memory component; they learn how to use these jointly.
Weakly Supervised Memory Networks
This paper introduces a variant of Memory Networks that needs significantly less supervision to perform question answering tasks and applies it to the synthetic bAbI tasks, showing that the approach is competitive with the supervised approach, particularly when trained on a sufficiently large amount of data.
Paraphrase-Driven Learning for Open Question Answering
This work demonstrates that it is possible to learn a semantic lexicon and linear ranking function without manually annotating questions and automatically generalizes a seed lexicon, and includes a scalable, parallelized perceptron parameter estimation scheme.
Open Question Answering with Weakly Supervised Embedding Models
This paper empirically demonstrates that the model can capture meaningful signals from its noisy supervision, leading to major improvements over Paralex, the only existing method able to be trained on similar weakly labeled data.
Semantic Parsing on Freebase from Question-Answer Pairs
This paper trains a semantic parser that scales up to Freebase and outperforms their state-of-the-art parser on the dataset of Cai and Yates (2013), despite not having annotated logical forms.
Question Answering with Subgraph Embeddings
A system which learns to answer questions on a broad range of topics from a knowledge base with few hand-crafted features, using low-dimensional embeddings of words and knowledge base constituents to score natural language questions against candidate answers.
End-To-End Memory Networks
A neural network with a recurrent attention model over a possibly large external memory that is trained end-to-end, and hence requires significantly less supervision during training, making it more generally applicable in realistic settings (a single attention hop over memory is sketched at the end of these references).
Web question answering: is more always better?
This paper describes a question answering system that is designed to capitalize on the tremendous amount of data that is now available online, and uses the redundancy available in large corpora as an important resource to simplify the query rewrites and support answer mining from returned snippets.
Open question answering over curated and extracted knowledge bases
This paper presents OQA, the first approach to leverage both curated and extracted KBs, and demonstrates that it achieves up to twice the precision and recall of a state-of-the-art Open QA system.
Joint Relational Embeddings for Knowledge-based Question Answering
This paper proposes a novel embedding-based approach that maps natural-language questions into logical forms for knowledge-based question answering by leveraging semantic associations between lexical representations and KB properties in the latent space.
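The End-To-End Memory Networks reference above describes soft attention over an external memory. As a rough illustration of a single memory "hop" (dimensions and values here are arbitrary; a real model learns the embedding matrices and usually stacks several hops):

import numpy as np

rng = np.random.default_rng(1)
d, n_memories = 8, 5

u = rng.normal(size=d)                 # question (controller) embedding
m = rng.normal(size=(n_memories, d))   # input memory embeddings, one per stored fact
c = rng.normal(size=(n_memories, d))   # output memory embeddings

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

p = softmax(m @ u)   # attention weights over memory slots (inner-product match)
o = p @ c            # attended read from the output memories
u_next = u + o       # next controller state; an answer layer would follow

print("attention weights:", np.round(p, 3))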