Combining Vector Space Embeddings with Symbolic Logical Inference over Open-Domain Text
@inproceedings{Gardner2015CombiningVS,
  title     = {Combining Vector Space Embeddings with Symbolic Logical Inference over Open-Domain Text},
  author    = {Matt Gardner and Partha P. Talukdar and Tom Michael Mitchell},
  booktitle = {AAAI Spring Symposia},
  year      = {2015}
}
We have recently shown how to combine random walk inference over knowledge bases with vector space representations of surface forms, improving performance on knowledge base inference. In this paper, we formalize the connection of our prior work to logical inference rules, giving some general observations about methods for incorporating vector space representations into symbolic logic systems. Additionally, we present some promising preliminary work that extends these techniques to learning open…
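The combination described here can be pictured as a symbolic inference rule whose body predicates are allowed to match approximately, by similarity in a relation embedding space, rather than only by exact symbol identity. The sketch below illustrates that general idea only; it is not the authors' formulation, and the relation names, embedding values, and threshold are invented for illustration.

```python
import numpy as np

# Hypothetical relation embeddings (values invented for illustration).
relation_vectors = {
    "city_in_country":     np.array([0.90, 0.10, 0.00]),
    "city_located_in":     np.array([0.85, 0.15, 0.05]),  # surface-text relation with similar meaning
    "country_has_capital": np.array([0.10, 0.90, 0.20]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# A symbolic path rule such as
#     city_in_country(X, Y) :- city_located_in(X, Y)
# ordinarily fires only on an exact match of the edge label named in its body.
# With vector space embeddings, the body predicate is instead allowed to match
# any edge whose label is close enough in embedding space.
def edge_matches(rule_relation, edge_relation, threshold=0.9):
    if rule_relation == edge_relation:          # exact symbolic match
        return True
    sim = cosine(relation_vectors[rule_relation],
                 relation_vectors[edge_relation])
    return sim >= threshold                     # relaxed, embedding-based match

print(edge_matches("city_in_country", "city_located_in"))     # True: near-synonymous labels
print(edge_matches("city_in_country", "country_has_capital")) # False: dissimilar labels
```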
15 Citations
Explaining automatic answers generated from knowledge base embedding models.
- Computer Science
- 2022
This work improves an existing method designed to provide explanations for predictions made by embedding models, and focuses on non-relational classifiers (such as deep neural networks).
Interpreting embedding models of knowledge bases.
- Computer Science
- 2018
This work proposes model-agnostic methods that allow one to interpret embedding models by extracting weighted Horn rules from them, and shows how the so-called “pedagogical techniques”, from the literature on neural networks, can be adapted to take into account the large-scale relational aspects of knowledge bases.
Reading and Reasoning with Knowledge Graphs
- Computer Science
- 2015
This thesis presents methods for reasoning over very large knowledge bases, and shows how to apply these methods to models of machine reading, which can successfully incorporate knowledge base information into machine learning models of natural language.
On the Capabilities and Limitations of Reasoning for Natural Language Understanding
- Computer Science, ArXiv
- 2019
This work presents the first formal framework to study empirical observations of linguistic variability in undirected graphs, addressing the ambiguity, redundancy, incompleteness, and inaccuracy that the use of language introduces when representing a hidden conceptual space.
Efficient and Expressive Knowledge Base Completion Using Subgraph Feature Extraction
- Computer Science, EMNLP
- 2015
It is shown that the random walk probabilities computed by PRA provide no discernible benefit on this task and can safely be dropped, which allows a simpler algorithm for generating feature matrices from graphs, called subgraph feature extraction (SFE).
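As a rough illustration of the feature extraction described above: connecting edge-label sequences can be collected by a plain breadth-first traversal and kept as binary features, with no random walk probability attached. This is only a one-sided toy simplification of the paper's two-sided search; the graph, relation names, and function names below are invented.

```python
from collections import defaultdict

# Toy knowledge graph as labeled adjacency lists: node -> [(relation, neighbor)].
graph = defaultdict(list)

def add_edge(src, rel, dst):
    graph[src].append((rel, dst))
    graph[dst].append(("_" + rel, src))   # inverse edge, marked with a prefix

add_edge("pittsburgh", "city_in_state", "pennsylvania")
add_edge("pennsylvania", "state_in_country", "usa")
add_edge("pittsburgh", "city_in_country", "usa")

def paths_from(node, max_hops=2):
    """Breadth-first enumeration of edge-label sequences starting at `node`."""
    frontier = [(node, ())]
    found = defaultdict(set)              # path type -> set of reachable end nodes
    for _ in range(max_hops):
        next_frontier = []
        for current, path in frontier:
            for rel, neighbor in graph[current]:
                new_path = path + (rel,)
                found[new_path].add(neighbor)
                next_frontier.append((neighbor, new_path))
        frontier = next_frontier
    return found

def subgraph_features(source, target, max_hops=2):
    """Binary features: the path types connecting source to target
    (no random walk probabilities attached)."""
    reachable = paths_from(source, max_hops)
    return {path for path, ends in reachable.items() if target in ends}

print(subgraph_features("pittsburgh", "usa"))
# e.g. {('city_in_country',), ('city_in_state', 'state_in_country')}
```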
Interpreting Embedding Models of Knowledge Bases: A Pedagogical Approach
- Computer Science, ICML 2018
- 2018
It is shown how pedagogical approaches have to be adapted to take into account the large-scale relational aspects of knowledge bases, and their strengths and weaknesses are demonstrated experimentally.
Reasoning-Driven Question-Answering for Natural Language Understanding
- Computer Science, ArXiv
- 2019
This thesis proposes a formulation for abductive reasoning in natural language and shows its effectiveness, especially in domains with limited training data, and presents the first formal framework for multi-step reasoning algorithms in the presence of a few important properties of language use.
Knowledge Graph Embedding with Iterative Guidance from Soft Rules
- Computer Science, AAAI
- 2018
Experimental results show that with rule knowledge injected iteratively, RUGE achieves significant and consistent improvements over state-of-the-art baselines, and that automatically extracted soft rules, despite their uncertainties, are highly beneficial to KG embedding, even when their confidence levels are only moderate.
A Graph-Based Framework for Structured Prediction Tasks in Sanskrit
- Computer Science, Computational Linguistics
- 2021
This work proposes a framework using energy-based models for multiple structured prediction tasks in Sanskrit. The arc-factored model, similar to graph-based parsing approaches, makes it possible to incorporate language-specific constraints that prune the search space and filter candidates during inference.
Learning to Reason with a Scalable Probabilistic Logic
- Computer Science
- 2015
A new, scalable probabilistic logic called ProPPR is proposed to combine the best of the symbolic and statistical worlds, together with a second-order abductive theory whose parameters correspond to plausible first-order inference rules.
11 References
Incorporating Vector Space Similarity in Random Walk Inference over Knowledge Bases
- Computer Science, EMNLP
- 2014
This paper presents a new technique for combining KB relations and surface text into a single graph representation that is much more compact than the graphs used in prior work, and describes how to incorporate vector space similarity into random walk inference over KBs.
Improving Learning and Inference in a Large Knowledge-Base using Latent Syntactic Cues
- Computer Science, EMNLP
- 2013
For the first time, it is demonstrated that adding edges labeled with latent features mined from a large dependency-parsed corpus of 500 million Web documents can significantly outperform previous PRA-based approaches on the KB inference task.
Random Walk Inference and Learning in A Large Scale Knowledge Base
- Computer Science, EMNLP
- 2011
It is shown that a soft inference procedure based on a combination of constrained, weighted random walks through the knowledge base graph can be used to reliably infer new beliefs for the knowledge base.
Scaling Textual Inference to the Web
- Computer Science, EMNLP
- 2008
The Holmes system utilizes textual inference over tuples extracted from text to scale textual inference to a corpus of 117 million Web pages, and its runtime is linear in the size of its input corpus.
Programming with personalized pagerank: a locally groundable first-order probabilistic logic
- Computer Science, CIKM
- 2013
A first-order probabilistic language is presented which is well-suited to approximate "local" grounding: in particular, every query can be approximately grounded with a small graph.
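The "local" grounding rests on approximate personalized PageRank, which can be computed with a standard push-style procedure that only ever touches nodes near the query. The sketch below shows that generic algorithm on a toy unweighted graph; it is not ProPPR's own grounding code, and the graph and parameter values are invented.

```python
from collections import defaultdict

def approx_personalized_pagerank(graph, seed, alpha=0.15, epsilon=1e-4):
    """Push-style approximation of personalized PageRank rooted at `seed`.
    Only nodes near the seed are ever touched, so the grounding stays small."""
    p = defaultdict(float)   # approximate PageRank mass per node
    r = defaultdict(float)   # residual mass still waiting to be pushed
    r[seed] = 1.0
    queue = [seed]
    while queue:
        u = queue.pop()
        neighbors = graph.get(u, [])
        degree = max(len(neighbors), 1)
        if r[u] < epsilon * degree:
            continue                      # residual too small to be worth pushing
        p[u] += alpha * r[u]              # keep a share of the mass at this node
        share = (1 - alpha) * r[u] / degree
        r[u] = 0.0
        for v in neighbors:               # spread the rest to the neighbors
            r[v] += share
            if r[v] >= epsilon * max(len(graph.get(v, [])), 1):
                queue.append(v)
    return dict(p)

# Toy graph around a query node "q" (structure invented for illustration).
graph = {"q": ["a", "b"], "a": ["q", "c"], "b": ["q"], "c": ["a"]}
print(approx_personalized_pagerank(graph, "q"))
```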
Learning First-Order Horn Clauses from Web Text
- Computer Science, EMNLP
- 2010
This paper investigates the problem of learning inference rules from Web text in an unsupervised, domain-independent manner and shows that inference over the learned rules discovers three times as many facts as the TextRunner system, which merely extracts facts explicitly stated in Web text.
Acquiring temporal constraints between relations
- Computer Science, CIKM
- 2012
The proposed algorithm, GraphOrder, is a novel and scalable graph-based label propagation algorithm that takes into account the transitivity of temporal order as well as statistics on the narrative order of verb mentions, and achieves up to a 38.4% absolute improvement in F1 over a random baseline.
Relational retrieval using a combination of path-constrained random walks
- Computer Science, Machine Learning
- 2010
A novel learnable proximity measure is described which instead uses one weight per edge label sequence: proximity is defined by a weighted combination of simple “path experts”, each corresponding to following a particular sequence of labeled edges.
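The "weighted combination of path experts" can be read as a log-linear (logistic regression) score over path-specific random walk probabilities. A compact sketch follows, assuming those per-path probabilities have already been computed; all feature values, weights, and names below are invented for illustration.

```python
import math

# One feature per edge-label sequence ("path expert"): the probability that a
# random walk following that sequence from the source node reaches the target.
path_features = {                                  # invented probabilities
    ("city_in_state", "state_in_country"): 0.70,
    ("city_located_in",): 0.40,
    ("borders", "city_in_country"): 0.05,
}

# One learned weight per path type (again, invented values).
path_weights = {
    ("city_in_state", "state_in_country"): 2.1,
    ("city_located_in",): 1.3,
    ("borders", "city_in_country"): -0.4,
}

def proximity_score(features, weights, bias=-1.0):
    """Weighted combination of path experts, squashed through a logistic."""
    z = bias + sum(weights.get(path, 0.0) * prob for path, prob in features.items())
    return 1.0 / (1.0 + math.exp(-z))

print(round(proximity_score(path_features, path_weights), 3))  # score for these toy values (~0.73)
```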
Toward an Architecture for Never-Ending Language Learning
- Computer Science, AAAI
- 2010
This work proposes an approach and a set of design principles for an intelligent computer agent that runs forever and describes a partial implementation of such a system that has already learned to extract a knowledge base containing over 242,000 beliefs.
A study of the knowledge base requirements for passing an elementary science test
- Computer Science, AKBC '13
- 2013
The analysis suggests that, in addition to fact extraction from text and statistically driven rule extraction, three other styles of automatic knowledge base construction (AKBC) would be useful: acquiring definitional knowledge, directly 'reading' rules from texts that state them, and, given a particular representational framework, acquiring specific instances of those models from text.