Open Domain Question Answering Using Early Fusion of Knowledge Bases and Text

@inproceedings{Sun2018OpenDQ,
  title={Open Domain Question Answering Using Early Fusion of Knowledge Bases and Text},
  author={Haitian Sun and Bhuwan Dhingra and Manzil Zaheer and Kathryn Mazaitis and Ruslan Salakhutdinov and William W. Cohen},
  booktitle={EMNLP},
  year={2018}
}
Open Domain Question Answering (QA) is evolving from complex pipelined systems to end-to-end deep neural networks. [...] Key Method: We construct a suite of benchmark tasks for this problem, varying the difficulty of questions, the amount of training data, and KB completeness. We show that GRAFT-Net is competitive with the state-of-the-art when tested using either KBs or text alone, and vastly outperforms existing methods in the combined setting. Source code is available at this https URL.
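
To make the "early fusion" idea concrete, here is a minimal sketch of building a single heterogeneous question subgraph that mixes KB entity nodes with retrieved text nodes before any neural reasoning happens. The `retrieve_kb_triples` and `retrieve_sentences` helpers are hypothetical stand-ins for the retrieval described in the paper, and `networkx` is used only for bookkeeping; this is not GRAFT-Net's actual implementation.

```python
# Minimal sketch of "early fusion": one heterogeneous question subgraph that
# contains both KB facts and retrieved text, built before any reasoning step.
# retrieve_kb_triples / retrieve_sentences are hypothetical helper callables.
import networkx as nx

def build_question_subgraph(question_entities, retrieve_kb_triples, retrieve_sentences):
    g = nx.MultiDiGraph()
    # 1) Add KB facts reachable from the entities linked in the question.
    for subj, rel, obj in retrieve_kb_triples(question_entities):
        g.add_node(subj, type="entity")
        g.add_node(obj, type="entity")
        g.add_edge(subj, obj, relation=rel)
    # 2) Add retrieved sentences as text nodes, linked to the entities they mention.
    for i, (sentence, mentioned_entities) in enumerate(retrieve_sentences(question_entities)):
        text_node = f"sent_{i}"
        g.add_node(text_node, type="text", text=sentence)
        for ent in mentioned_entities:
            g.add_node(ent, type="entity")
            g.add_edge(ent, text_node, relation="mentioned_in")
            g.add_edge(text_node, ent, relation="mentions")
    return g
```

A graph neural network can then propagate information across both node types at once, which is what distinguishes early fusion from answering over the KB and the text separately and merging the results late.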
PullNet: Open Domain Question Answering with Iterative Retrieval on Knowledge Bases and Text
TLDR: PullNet is described, an integrated framework for learning what to retrieve and reasoning with this heterogeneous information to find the best answer in an open-domain question answering setting.
A General FOFE-net Framework for Simple and Effective Question Answering over Knowledge Bases
TLDR: A simple but general neural model composed of fixed-size ordinally forgetting encoding (FOFE) and deep neural networks, called FOFE-net, is proposed to solve the KB-QA problem at different stages; experimental results show that FOFE-net performs well on the KB-QA subtasks of entity discovery and linking (EDL) and relation detection, in turn pushing the overall KB-QA system to strong results on all datasets.
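
Since the summary leans on the FOFE representation, a minimal sketch of the encoding itself may help: each prefix of a word-id sequence is folded into one fixed-size vector via z_t = alpha * z_{t-1} + e_t, where e_t is the one-hot vector of word t and alpha is a forgetting factor. The toy vocabulary below is illustrative only.

```python
# Fixed-size ordinally forgetting encoding (FOFE) of a word-id sequence:
# z_t = alpha * z_{t-1} + e_t, with e_t the one-hot vector of word t.
import numpy as np

def fofe_encode(word_ids, vocab_size, alpha=0.7):
    z = np.zeros(vocab_size)
    for wid in word_ids:
        z = alpha * z      # decay earlier words
        z[wid] += 1.0      # add one-hot of the current word
    return z

# Toy example: a 3-word sequence over a vocabulary of size 5.
print(fofe_encode([0, 3, 1], vocab_size=5))  # -> [0.49, 1.0, 0.0, 0.7, 0.0]
```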
Differentiable Reasoning over a Virtual Knowledge Base
TLDR: DrKIT, a neural module that traverses textual data like a virtual KB, softly following paths of relations between mentions of entities in the corpus; it improves accuracy by 9 points on 3-hop questions in the MetaQA dataset and is very efficient.
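
A rough sketch of what "softly following paths of relations" over a corpus can look like: current entity weights are expanded to co-occurring mentions through a sparse matrix, the mentions are reweighted by their relevance to the question's relation, and the result is aggregated back to entities. The matrices and the score vector below are illustrative assumptions, not DrKIT's actual data structures.

```python
# One "soft hop" over a corpus treated as a virtual KB (illustrative only).
# The co-occurrence and coreference matrices may be dense numpy arrays or
# scipy.sparse matrices; both support the @ operator used here.
import numpy as np

def soft_hop(entity_weights, ent_to_mention, mention_to_ent, mention_scores):
    # entity_weights: (n_entities,) current soft set of entities
    # ent_to_mention: (n_entities, n_mentions) entity/mention co-occurrence
    # mention_to_ent: (n_mentions, n_entities) mention-to-entity coreference
    # mention_scores: (n_mentions,) relevance of each mention to the question relation
    mention_weights = ent_to_mention.T @ entity_weights   # expand to mentions
    mention_weights = mention_weights * mention_scores    # keep relevant mentions
    next_entities = mention_to_ent.T @ mention_weights    # aggregate back to entities
    total = next_entities.sum()
    return next_entities / total if total > 0 else next_entities
```

Repeating this hop k times yields a soft set of answer entities for a k-hop question, which is what makes multi-hop traversal over text differentiable end to end.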
Question answering over knowledge bases with continuous learning
TLDR: This dissertation introduces NEQA, a framework for continuous learning for QA over KBs, and presents QUINT, an approach for answering natural language questions over knowledge bases using automatically learned templates.
Improving Question Answering over Incomplete KBs with Knowledge-Aware Reader
TLDR: A new end-to-end question answering model that learns to aggregate answer evidence from an incomplete knowledge base (KB) and a set of retrieved text snippets, achieving consistent improvements across settings with different extents of KB incompleteness.
Complex Question Answering on knowledge graphs using machine translation and multi-task learning
TLDR: A multi-task BERT-based Neural Machine Translation (NMT) model is proposed to address question answering over a knowledge graph, and its efficacy is demonstrated on one publicly available and one proprietary dataset.
Research on Automatic Question Answering of Generative Knowledge Graph Based on Pointer Network
TLDR: A new generative question answering method based on a knowledge graph is proposed, consisting of three parts: knowledge vocabulary construction, data pre-processing, and answer generation; it achieves superior performance on the WebQA dataset compared with other methods.
A Survey on Complex Question Answering over Knowledge Base: Recent Advances and Challenges
TLDR: The recent advances in complex QA over knowledge bases are introduced, the methods of the main branches are described, directions for future research are analyzed, and the models proposed by the AliMe team are presented.
Retrieval-Based Open-Domain Question Answering: Exploring The Impact on The Retrieval Component across Datasets
TLDR: This research gap is the focus of this thesis: error analyses of questions from different QA datasets are conducted to identify the types of questions that have a negative impact on traditional IR models.

References

Showing 1-10 of 55 references.
Question Answering on Knowledge Bases and Text using Universal Schema and Memory Networks
TLDR: Evaluation results on the SPADES fill-in-the-blank question answering dataset show that exploiting universal schema for question answering is better than using either a KB or text alone.
R3: Reinforced Ranker-Reader for Open-Domain Question Answering
TLDR: A new pipeline for open-domain QA with a Ranker component, which learns to rank retrieved passages in terms of the likelihood of generating the ground-truth answer to a given question, and a novel method that jointly trains the Ranker along with an answer-generation Reader model, based on reinforcement learning.
Question Answering from Unstructured Text by Retrieval and Comprehension
TLDR: This work presents a two-step approach to question answering from unstructured text, consisting of a retrieval step and a comprehension step, featuring an RNN-based attention model with a novel mixture mechanism for selecting answers from either retrieved articles or a fixed vocabulary.
Simple and Effective Semi-Supervised Question Answering
TLDR: This work envisions a system where the end user specifies a set of base documents and only a few labeled examples; it exploits the document structure to create cloze-style questions from these base documents, pre-trains a powerful neural network on the cloze-style questions, and further fine-tunes the model on the labeled examples.
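
The cloze-style pre-training signal described above can be sketched as follows: blank out an entity mention in a sentence and treat the blanked sentence as the question and the removed span as the answer. The spaCy model name is an assumption, any NER or noun-phrase chunker would do, and this is not the paper's exact generation procedure.

```python
# Sketch of generating cloze-style (question, answer) pairs from raw text
# by blanking out entity mentions. Assumes spaCy with an English model.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed model; any NER pipeline works

def make_cloze_questions(sentence, blank="@placeholder"):
    doc = nlp(sentence)
    pairs = []
    for ent in doc.ents:  # each entity mention yields one cloze question
        question = sentence[:ent.start_char] + blank + sentence[ent.end_char:]
        pairs.append((question, ent.text))
    return pairs

# Example:
# make_cloze_questions("Alan Turing was born in London in 1912.")
# -> [("@placeholder was born in London in 1912.", "Alan Turing"), ...]
```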
Reading Wikipedia to Answer Open-Domain Questions
TLDR: This approach combines a search component based on bigram hashing and TF-IDF matching with a multi-layer recurrent neural network model trained to detect answers in Wikipedia paragraphs, indicating that both modules are highly competitive with respect to existing counterparts.
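
A minimal approximation of the retrieval component, assuming scikit-learn: hashed unigram/bigram counts with TF-IDF weighting, ranking paragraphs by inner-product similarity to the question. This mirrors the bigram-hashing plus TF-IDF idea but is not the paper's own implementation.

```python
# Sketch of bigram-hashed TF-IDF retrieval over a set of paragraphs,
# approximated with scikit-learn components.
from sklearn.feature_extraction.text import HashingVectorizer, TfidfTransformer
from sklearn.metrics.pairwise import linear_kernel

def build_index(paragraphs, n_features=2**20):
    hasher = HashingVectorizer(ngram_range=(1, 2), n_features=n_features,
                               alternate_sign=False, norm=None)
    counts = hasher.transform(paragraphs)          # hashed unigram/bigram counts
    tfidf = TfidfTransformer().fit(counts)         # learn IDF weights
    return hasher, tfidf, tfidf.transform(counts)  # weighted document matrix

def retrieve(question, hasher, tfidf, doc_matrix, paragraphs, k=5):
    q_vec = tfidf.transform(hasher.transform([question]))
    scores = linear_kernel(q_vec, doc_matrix).ravel()   # inner-product similarity
    top = scores.argsort()[::-1][:k]
    return [(paragraphs[i], float(scores[i])) for i in top]
```

The reader model then only has to detect answer spans inside the top-k retrieved paragraphs rather than the whole corpus.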
Weaver: Deep Co-Encoding of Questions and Documents for Machine Reading
TLDR: The Weaver model is introduced that uses a new way to relate a question to a textual context by weaving layers of recurrent networks, with the goal of making as few assumptions as possible as to how the information from both question and context should be combined to form the answer.
YodaQA: A Modular Question Answering System Pipeline
TLDR: This paper seeks to reunite and boost research efforts in Question Answering, providing a modular, open-source pipeline for this task, allowing integration of various knowledge base paradigms and answer production and analysis strategies, and using machine-learned models to rank the answers.
Question Answering over Knowledge Base using Factual Memory Networks
TLDR: Factual Memory Network is introduced, which learns to answer questions by extracting and reasoning over relevant facts from a Knowledge Base, and improves the run-time efficiency of the model using various computational heuristics.
The Web as a Knowledge-Base for Answering Complex Questions
TLDR: This paper proposes to decompose complex questions into a sequence of simple questions and compute the final answer from the sequence of answers, and empirically demonstrates that question decomposition improves performance from 20.8 precision@1 to 27.5 precision@1 on this new dataset.
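
One way to picture the decomposition strategy, with `decompose` and `simple_qa` as assumed components rather than the paper's models: answer the sub-questions in order, let later sub-questions refer back to earlier answers, and take the last answer as the final one. Real composition operators (e.g. conjunctions) are more involved than this sketch.

```python
# Hypothetical sketch of answering a complex question by decomposition.
# `decompose` splits the question into simpler sub-questions; `simple_qa`
# answers a single simple question. Neither is the paper's actual model.
def answer_by_decomposition(question, decompose, simple_qa):
    answers = []
    for sub_q in decompose(question):
        # Later sub-questions may reference earlier answers via "#1", "#2", ...
        for i, prev in enumerate(answers, start=1):
            sub_q = sub_q.replace(f"#{i}", prev)
        answers.append(simple_qa(sub_q))
    return answers[-1]

# Example with toy components:
# decompose("Which team does the winner of the 2015 MVP award play for?")
#   -> ["Who won the 2015 MVP award?", "Which team does #1 play for?"]
```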
Improved Neural Relation Detection for Knowledge Base Question Answering
TLDR: A hierarchical recurrent neural network enhanced by residual learning that detects KB relations given an input question is proposed and helps the KBQA system to achieve state-of-the-art accuracy for both single-relation and multi-relation QA benchmarks.