
Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks

@article{Weston2016TowardsAQ,
  title={Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks},
  author={Jason Weston and Antoine Bordes and Sumit Chopra and Tomas Mikolov},
  journal={arXiv: Artificial Intelligence},
  year={2016}
}
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent. To measure progress towards that goal, we argue for the usefulness of a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many more. The tasks…

Citations

Addressing a Question Answering Challenge by Combining Statistical Methods with Inductive Rule Learning and Reasoning
This work presents a system that excels at all the tasks except one and demonstrates that the introduction of a reasoning module significantly improves the performance of an intelligent agent.
Getting Closer to AI Complete Question Answering: A Set of Prerequisite Real Tasks
QuAIL is presented, the first RC dataset to combine text-based, world-knowledge, and unanswerable questions, and to provide question-type annotation that enables diagnostics of the reasoning strategies of a given QA system.
LSTMs and Dynamic Memory Networks for Human-Written Simple Question Answering
One of the larger goals of Artificial Intelligence research is to produce methods that improve natural language processing and understanding and increase the ability of agents to interact and…
Evaluating Prerequisite Qualities for Learning End-to-End Dialog Systems
This work proposes a suite of new tasks that test the ability of models to answer factual questions, provide personalization, carry short conversations about the two, and finally to perform on natural dialogs from Reddit.
Intelligent Question Answering System
An intelligent Q&A system is developed that takes a fact database as input and answers questions across a range of complexity, using datasets from the Facebook bAbI tasks as its knowledge base for training.
Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge
A new question set, text corpus, and baselines assembled to encourage AI research in advanced question answering constitute the AI2 Reasoning Challenge (ARC), which requires far more powerful knowledge and reasoning than previous challenges such as SQuAD or SNLI.
Interactive Language Learning by Question Answering
This work proposes and evaluates a set of baseline models for the QAit task that includes deep reinforcement learning agents, and shows that the task presents a major challenge for machine reading systems, while humans solve it with relative ease.
Learning to Query, Reason, and Answer Questions On Ambiguous Texts
Standard and improved reinforcement-learning-based memory-network architectures are used to solve QRAQ problems in the difficult setting where the reward signal only tells the agent whether its final answer to the challenge question is correct, and to provide an upper bound on the RL results.
A Neural Question Answering System for Basic Questions about Subroutines
This paper designs a context-based QA system for basic questions about subroutines, based on rules the authors extract from recent empirical studies, then trains a custom neural QA model with this dataset and evaluates the model in a study with professional programmers.
Identifying facts for chatbot's question answering via sequence labelling using recurrent neural networks
This study presents an attention-based architecture for sequence labelling on a deep recurrent neural network (DRNN) and shows that the proposed model provides consistent improvements and outperforms traditional approaches.

References

MCTest: A Challenge Dataset for the Open-Domain Machine Comprehension of Text
MCTest is presented, a freely available set of stories and associated questions intended for research on the machine comprehension of text, which requires machines to answer multiple-choice reading comprehension questions about fictional stories, directly tackling the high-level goal of open-domain machine comprehension.
Large-scale Simple Question Answering with Memory Networks
This paper studies the impact of multitask and transfer learning for simple question answering: a setting for which the reasoning required to answer is quite easy, as long as one can retrieve the correct evidence given a question, which can be difficult in large-scale conditions.
Open question answering over curated and extracted knowledge bases
This paper presents OQA, the first approach to leverage both curated and extracted KBs, and demonstrates that it achieves up to twice the precision and recall of a state-of-the-art Open QA system.
Towards Neural Network-based Reasoning
The empirical studies show that Neural Reasoner can outperform existing neural reasoning systems by remarkable margins on two difficult artificial tasks (Positional Reasoning and Path Finding) proposed in [8].
Memory Networks
This work describes a new class of learning models called memory networks, which reason with inference components combined with a long-term memory component; they learn how to use these jointly.
Paraphrase-Driven Learning for Open Question Answering
This work demonstrates that it is possible to learn a semantic lexicon and linear ranking function without manually annotating questions; it automatically generalizes a seed lexicon and includes a scalable, parallelized perceptron parameter-estimation scheme.
Semantic Parsing on Freebase from Question-Answer Pairs
This paper trains a semantic parser that scales up to Freebase and outperforms their state-of-the-art parser on the dataset of Cai and Yates (2013), despite not having annotated logical forms.
Ask Me Anything: Dynamic Memory Networks for Natural Language Processing
This work introduces the dynamic memory network (DMN), a neural network architecture that processes input sequences and questions, forms episodic memories, and generates relevant answers.
Teaching Machines to Read and Comprehend
A new methodology is defined that resolves the data bottleneck and provides large-scale supervised reading comprehension data, allowing the development of a class of attention-based deep neural networks that learn to read real documents and answer complex questions with minimal prior knowledge of language structure.
Learning Dependency-Based Compositional Semantics
A new semantic formalism, dependency-based compositional semantics (DCS), is developed; a log-linear distribution over DCS logical forms is defined, and the system is shown to obtain accuracies comparable to even state-of-the-art systems that do require annotated logical forms.