Publications
Seq2SQL: Generating Structured Queries from Natural Language using Reinforcement Learning
TLDR
We propose Seq2SQL, a deep neural network for translating natural language questions to corresponding SQL queries.
Ask Me Anything: Dynamic Memory Networks for Natural Language Processing
TLDR
We introduce the dynamic memory network (DMN), a neural network architecture which processes input sequences and questions, forms episodic memories, and generates relevant answers.
Dynamic Coattention Networks For Question Answering
TLDR
We introduce the Dynamic Coattention Network (DCN), an end-to-end neural network architecture for question answering.
Position-aware Attention and Supervised Data Improve Slot Filling
TLDR
We combine an LSTM sequence model with a form of entity position-aware attention that is better suited to relation extraction.
Multi-hop Reading Comprehension through Question Decomposition and Rescoring
TLDR
We propose a system for multi-hop reading comprehension that decomposes a compositional question into simpler sub-questions that can be answered by off-the-shelf single-hop RC models.
DCN+: Mixed Objective and Deep Residual Coattention for Question Answering
TLDR
We propose a mixed objective that combines cross entropy loss with self-critical policy learning.
Efficient and Robust Question Answering from Minimal Context over Documents
TLDR
In this paper, we study the minimal context required to answer a question, and find that most questions in existing datasets can be answered with a small set of sentences.
Global-Locally Self-Attentive Dialogue State Tracker
TLDR
We propose the Global-Locally Self-Attentive Dialogue State Tracker (GLAD), which learns representations of the user utterance and previous system actions with global-local modules.
Bootstrapped Self Training for Knowledge Base Population
TLDR
We propose bootstrapped self-training to capture the benefits of both systems: the precision of patterns and the generalizability of trained models.
Coarse-grain Fine-grain Coattention Network for Multi-evidence Question Answering
TLDR
We propose the Coarse-grain Fine-grain Coattention Network (CFC), a new question answering model that combines information from evidence across multiple documents.