• Corpus ID: 231924705

Reasoning Over Virtual Knowledge Bases With Open Predicate Relations

@article{Sun2021ReasoningOV,
  title={Reasoning Over Virtual Knowledge Bases With Open Predicate Relations},
  author={Haitian Sun and Pat Verga and Bhuwan Dhingra and Ruslan Salakhutdinov and William W. Cohen},
  journal={ArXiv},
  year={2021},
  volume={abs/2102.07043}
}
We present the Open Predicate Query Language (OPQL), a method for constructing a virtual KB (VKB) trained entirely from text. Large Knowledge Bases (KBs) are indispensable for a wide range of industry applications such as question answering and recommendation. Typically, KBs encode world knowledge in a structured, readily accessible form derived from laborious human annotation efforts. Unfortunately, while they are extremely high precision, KBs are inevitably highly incomplete and automated…
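As a rough illustration of the virtual-KB idea, the sketch below shows what a lookup against such a structure might look like: relation mentions from a corpus are pre-encoded into a dense index, and a (subject, open relation) query vector is matched against it by inner product. Everything here (dimensions, the concatenation-based query encoder, the random index) is an assumption for illustration; OPQL's actual encoders and training procedure are described in the paper itself.

```python
import numpy as np

# Hypothetical virtual-KB index (stand-in for a learned one): each row
# encodes one relation mention from the corpus, and tail_entities[i] is
# the entity that mention i points to.
rng = np.random.default_rng(0)
NUM_MENTIONS, DIM = 10_000, 256
mention_index = rng.standard_normal((NUM_MENTIONS, DIM)).astype(np.float32)
mention_index /= np.linalg.norm(mention_index, axis=1, keepdims=True)
tail_entities = rng.integers(0, 500, size=NUM_MENTIONS)

def encode_query(subject_vec, relation_vec):
    """Toy query encoder: concatenate subject and open-relation vectors.
    OPQL learns its encoder from text; this is only a placeholder."""
    q = np.concatenate([subject_vec, relation_vec])
    return (q / np.linalg.norm(q)).astype(np.float32)

def query_vkb(q, k=5):
    """Soft KB lookup: inner-product search over the mention index,
    returning the tail entities of the k best-matching mentions."""
    top = np.argsort(-(mention_index @ q))[:k]
    return [int(tail_entities[i]) for i in top]

q = encode_query(rng.standard_normal(DIM // 2), rng.standard_normal(DIM // 2))
print(query_vkb(q))
```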

Citations

End-to-End Multihop Retrieval for Compositional Question Answering over Long Documents
TLDR
This paper proposes a multihop retrieval method, DOCHOPPER, to answer compositional questions over long documents and demonstrates that utilizing document structure in this way can largely improve question-answering and retrieval performance on long documents.
Pre-Trained Models: Past, Present and Future
A Review on Language Models as Knowledge Bases
TLDR
This paper presents a set of aspects that an LM should have to fully act as a KB, and reviews the recent literature with respect to those aspects.
Mention Memory: incorporating textual knowledge into Transformers through entity mention attention
TLDR
The proposed model, TOME, is a Transformer that accesses textual information through internal memory layers in which each entity mention in the input passage attends to a mention memory, enabling synthesis of and reasoning over many disparate sources of information within a single Transformer model.
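A minimal sketch of the mention-memory attention idea, assuming a precomputed memory of mention encodings; the dimensions and single-layer setup are illustrative assumptions, not TOME's actual architecture.

```python
import torch
import torch.nn.functional as F

def mention_memory_attention(mention_queries, memory_keys, memory_values):
    """Each entity mention in the passage (one query vector per mention)
    attends over a large table of precomputed mention encodings and pulls
    back a weighted mixture of their value vectors.
      mention_queries: (num_mentions, d)
      memory_keys, memory_values: (memory_size, d)"""
    scores = mention_queries @ memory_keys.T / memory_keys.shape[-1] ** 0.5
    return F.softmax(scores, dim=-1) @ memory_values

# Toy usage: 3 passage mentions attend over a 1,000-entry mention memory.
queries = torch.randn(3, 64)
keys, values = torch.randn(1_000, 64), torch.randn(1_000, 64)
out = mention_memory_attention(queries, keys, values)  # shape (3, 64)
```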
QAMPARI: An Open-domain Question Answering Benchmark for Questions with Many Answers from Multiple Paragraphs
TLDR
QA models from the retrieve-and-read family are trained, showing that QAMPARI is challenging in terms of both passage retrieval and answer generation, reaching an F1 score of 26.6 at best.
Faithful to the Document or to the World? Mitigating Hallucinations via Entity-linked Knowledge in Abstractive Summarization
TLDR
By utilizing an external knowledge base, the faithfulness of summaries can be improved without simply making them more extractive, and it is shown that external knowledge bases linked from the source can benefit the factuality of generated summaries.
Augmenting Pre-trained Language Models with QA-Memory for Open-Domain Question Answering
TLDR
A new QA system that augments a text-to-text model with a large memory of question-answer pairs, together with a new pre-training task for the latent step of question retrieval, greatly improving performance on smaller QA benchmarks.
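A rough sketch of the retrieve-from-QA-memory step, under the assumption that stored questions are dense-encoded and matched by inner product; the random encodings here are stand-ins for a trained question encoder, and the pre-training task is not shown.

```python
import numpy as np

# Hypothetical QA memory: stored question-answer pairs plus dense
# encodings of the stored questions.
qa_memory = [
    ("Who wrote Dune?", "Frank Herbert"),
    ("What is the capital of Peru?", "Lima"),
    ("When did the Berlin Wall fall?", "1989"),
]
rng = np.random.default_rng(1)
stored_encs = rng.standard_normal((len(qa_memory), 128)).astype(np.float32)
stored_encs /= np.linalg.norm(stored_encs, axis=1, keepdims=True)

def retrieve_qa_pairs(query_enc, k=2):
    """Return the k stored QA pairs whose question encodings best match the
    query; these would be concatenated to the text-to-text model's input."""
    scores = stored_encs @ (query_enc / np.linalg.norm(query_enc))
    return [qa_memory[i] for i in np.argsort(-scores)[:k]]

print(retrieve_qa_pairs(rng.standard_normal(128).astype(np.float32)))
```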
Relation-Guided Pre-Training for Open-Domain Question Answering
TLDR
It is demonstrated that by pre-training with the proposed RGPT-QA technique, the popular open-domain QA model Dense Passage Retriever achieves 2.2%, 2.4%, and 6.3% absolute improvements in Exact Match accuracy on Natural Questions, TriviaQA, and WebQuestions.
Complex Knowledge Base Question Answering: A Survey
TLDR
A review of recent advances in KBQA, focusing on complex questions, which usually contain multiple subjects, express compound relations, or involve numerical operations; it covers the two mainstream categories of methods, namely semantic parsing-based (SP-based) methods and information retrieval-based (IR-based) methods.
...

References

Showing 1-10 of 41 references
Differentiable Reasoning over a Virtual Knowledge Base
TLDR
A neural module, DrKIT, traverses textual data like a virtual KB, softly following paths of relations between mentions of entities in the corpus; it improves accuracy by 9 points on 3-hop questions in the MetaQA dataset and is very efficient.
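The "soft relation following" can be pictured as sparse matrix products over an entity-mention graph. The toy matrices and relation scores below are invented for illustration; DrKIT computes the mention scores with learned encoders and a compressed index.

```python
import numpy as np

# Toy corpus graph: 3 entities, 4 mentions.
# ent2men[e, m] = 1 if mention m occurs in text about entity e;
# men2ent[m] is a one-hot row naming the entity that mention m refers to.
ent2men = np.array([[1, 1, 0, 0],
                    [0, 0, 1, 0],
                    [0, 0, 0, 1]], dtype=np.float32)
men2ent = np.array([[0, 1, 0],
                    [0, 0, 1],
                    [1, 0, 0],
                    [0, 1, 0]], dtype=np.float32)

def follow_relation(entity_dist, mention_scores):
    """One soft hop: spread probability from entities to their mentions,
    reweight each mention by how well it matches the relation query,
    then collect the probability on the entities those mentions name."""
    weighted = (entity_dist @ ent2men) * mention_scores
    out = weighted @ men2ent
    return out / out.sum()

start = np.array([1.0, 0.0, 0.0], dtype=np.float32)           # start at entity 0
rel_match = np.array([0.9, 0.1, 0.5, 0.2], dtype=np.float32)  # per-mention relation score
print(follow_relation(start, rel_match))  # -> [0.  0.9 0.1]: mass flows to entity 1
```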
Open Domain Question Answering Using Early Fusion of Knowledge Bases and Text
TLDR
A novel model, GRAFT-Net, is proposed for extracting answers from a question-specific subgraph containing text and Knowledge Base entities and relations; it is competitive with the state of the art when tested using either KBs or text alone, and vastly outperforms existing methods in the combined setting.
Leveraging Linguistic Structure For Open Domain Information Extraction
TLDR
This work replaces the large pattern sets of earlier Open IE systems with a few patterns for canonically structured sentences, and shifts the focus to a classifier which learns to extract self-contained clauses from longer sentences, to determine the maximally specific arguments for each candidate triple.
Faithful Embeddings for Knowledge Base Queries
TLDR
A novel query embedding (QE) method is proposed that is more faithful to deductive reasoning, which leads to better performance on complex queries to incomplete KBs; inserting this new QE module into a neural question-answering system yields substantial improvements over the state of the art.
Variational Reasoning for Question Answering with Knowledge Graph
TLDR
This work proposes a novel and unified deep learning architecture, and an end-to-end variational learning algorithm, that can handle noise in questions and learn multi-hop reasoning simultaneously.
The Web as a Knowledge-Base for Answering Complex Questions
TLDR
This paper proposes to decompose complex questions into a sequence of simple questions and compute the final answer from the sequence of answers, and empirically demonstrates that question decomposition improves performance from 20.8 precision@1 to 27.5 precision@1 on this new dataset.
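In outline, the decompose-then-answer loop might look like the sketch below; the `decompose` and `simple_qa` components are hypothetical stand-ins for the learned question splitter and the underlying simple-QA model.

```python
def answer_complex(question, decompose, simple_qa):
    """Answer a compositional question by chaining simple ones; each
    sub-question may reference the previous answer via the token '#1'."""
    answer = None
    for sub_q in decompose(question):
        if answer is not None:
            sub_q = sub_q.replace("#1", answer)
        answer = simple_qa(sub_q)
    return answer

# Toy run with hand-written stand-ins for the learned components.
decompose = lambda q: ["Which team won Super Bowl 50?",
                       "Who was the quarterback of #1?"]
toy_qa = {
    "Which team won Super Bowl 50?": "Denver Broncos",
    "Who was the quarterback of Denver Broncos?": "Peyton Manning",
}.__getitem__
print(answer_complex(
    "Who was the quarterback of the team that won Super Bowl 50?",
    decompose, toy_qa))  # -> Peyton Manning
```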
Key-Value Memory Networks for Directly Reading Documents
TLDR
This work introduces a new method, Key-Value Memory Networks, that makes reading documents more viable by utilizing different encodings in the addressing and output stages of the memory read operation.
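The core read operation can be sketched in a few lines: memory slots are addressed with one encoding (keys) but return another (values). The shapes and single-hop setup here are illustrative assumptions, not the paper's full model.

```python
import torch
import torch.nn.functional as F

def kv_memory_read(query, keys, values):
    """Key-value memory read: address slots with the key encodings
    (e.g. a window of words) but output the value encodings (e.g. the
    window's centre entity), so the two stages can use different views."""
    weights = F.softmax(query @ keys.T, dim=-1)  # addressing stage
    return weights @ values                      # output stage

query = torch.randn(1, 32)
keys, values = torch.randn(50, 32), torch.randn(50, 32)
output = kv_memory_read(query, keys, values)  # (1, 32); can update the query for another hop
```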
PullNet: Open Domain Question Answering with Iterative Retrieval on Knowledge Bases and Text
TLDR
This paper describes PullNet, an integrated framework for learning what to retrieve and for reasoning over this heterogeneous information to find the best answer in an open-domain question answering setting.
Semantic Parsing via Staged Query Graph Generation: Question Answering with Knowledge Base
TLDR
This work proposes a novel semantic parsing framework for question answering using a knowledge base that leverages the knowledge base in an early stage to prune the search space and thus simplifies the semantic matching problem.
CaRe: Open Knowledge Graph Embeddings
TLDR
Canonicalization-infused Representations (CaRe) are proposed, and it is observed that CaRe enables existing models to adapt to the challenges in OpenKGs and achieve substantial improvements on the link prediction task.
...