Rule Learning from Knowledge Graphs Guided by Embedding Models

@inproceedings{Ho2018RuleLF,
  title={Rule Learning from Knowledge Graphs Guided by Embedding Models},
  author={Vinh Thinh Ho and Daria Stepanova and Mohamed H. Gad-Elrab and Evgeny Kharlamov and Gerhard Weikum},
  booktitle={SEMWEB},
  year={2018}
}
Rules over a Knowledge Graph (KG) capture interpretable patterns in data and various methods for rule learning have been proposed. Since KGs are inherently incomplete, rules can be used to deduce missing facts. Statistical measures for learned rules such as confidence reflect rule quality well when the KG is reasonably complete; however, these measures might be misleading otherwise. So it is difficult to learn high-quality rules from the KG alone, and scalability dictates that only a small set… 
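For context, the confidence measure referred to in the abstract is the share of a rule's body groundings whose head fact is also present in the KG; written in the usual (assumed) notation for a rule with body **B** and head h(X, Y) over a graph G:

```latex
\mathrm{conf}\bigl(\mathbf{B} \Rightarrow h(X,Y)\bigr)
  \;=\;
  \frac{\lvert\{(x,y) : \mathbf{B}(x,y) \,\wedge\, h(x,y) \in \mathcal{G}\}\rvert}
       {\lvert\{(x,y) : \mathbf{B}(x,y)\}\rvert}
```

Because every body grounding whose head fact is merely missing from G counts against the rule, this measure penalizes correct rules on incomplete KGs, which is exactly the problem the paper targets.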
Learning Rules from Incomplete KGs using Embeddings
TLDR
A rule learning method is proposed that utilizes probabilistic representations of missing facts and iteratively extends rules induced from a KG by relying on feedback from a precomputed embedding model over the KG and from external information sources, including text corpora.
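The embedding-guided extension can be pictured as blending classical confidence with the plausibility that a pretrained embedding model assigns to the facts a rule predicts beyond the KG. The following is a minimal sketch of that idea, not the paper's exact formulation; the function names, the mixing weight `lam`, and the `embedding_prob` oracle are assumptions for illustration:

```python
def hybrid_quality(rule, kg, embedding_prob, lam=0.3):
    """Blend classical confidence with embedding feedback.

    rule.groundings(kg) is assumed to yield (body_grounding, head_fact)
    pairs; embedding_prob(fact) is assumed to return a plausibility
    score in [0, 1] from a precomputed KG embedding model; kg is a set
    of facts.
    """
    groundings = list(rule.groundings(kg))
    if not groundings:
        return 0.0

    in_kg = [head for _, head in groundings if head in kg]
    missing = [head for _, head in groundings if head not in kg]

    # Classical confidence: share of body groundings whose head fact is in the KG.
    conf = len(in_kg) / len(groundings)

    # Embedding feedback: average plausibility of the predicted-but-missing heads.
    emb = (sum(embedding_prob(f) for f in missing) / len(missing)) if missing else 1.0

    # Hybrid score used to rank candidate rules despite KG incompleteness.
    return (1 - lam) * conf + lam * emb
```

Candidate rules ranked by such a blended score are no longer punished for predicting facts that are plausible but simply absent from the KG.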
Differentiable learning of numerical rules in knowledge graphs
TLDR
This work extends Neural LP to learn rules with numerical values and extracts more expressive rules with aggregates, which are of higher quality and yield more accurate predictions than rules learned by state-of-the-art methods, as shown by experiments on synthetic and real-world datasets.
Few-Shot Knowledge Validation using Rules
TLDR
Colt is a few-shot, rule-based knowledge validation framework that enables interactive quality assessment of logic rules; the problem is formalized as learning a validation function over a rule's outcomes, and its theoretical connections to the generalized maximum coverage problem are studied.
An Embedding-Based Approach to Rule Learning in Knowledge Graphs
TLDR
For massive knowledge graphs with hundreds of predicates and over 10M facts, RLvLR is much faster and learns many more quality rules than major rule learning systems for knowledge graphs such as AMIE+.
Iteratively Learning Embeddings and Rules for Knowledge Graph Reasoning
TLDR
A novel framework, IterE, that iteratively learns embeddings and rules is proposed, in which rules are learned from embeddings with a proper pruning strategy and embeddings are learned from existing triples and new triples inferred by rules.
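A rough sketch of such an alternating loop, with all interfaces assumed rather than taken from the paper (the embedding learner and rule miner are caller-supplied, and rule objects are assumed to expose a confidence attribute and an infer method):

```python
def iterate_embeddings_and_rules(kg, train_embeddings, mine_rules,
                                 n_rounds=5, min_conf=0.7):
    """Hypothetical outline of an IterE-style loop.

    kg is a set of triples; train_embeddings(triples) and
    mine_rules(model, triples) stand in for the embedding learner and
    the rule miner; min_conf is an assumed pruning threshold.
    """
    triples = set(kg)
    model, rules = None, []
    for _ in range(n_rounds):
        model = train_embeddings(triples)                # embeddings from current triples
        rules = [r for r in mine_rules(model, triples)   # rules learned from embeddings ...
                 if r.confidence >= min_conf]            # ... kept only above the threshold
        inferred = {f for r in rules for f in r.infer(triples)}
        triples |= inferred                              # inferred triples feed the next round
    return model, rules
```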
EngineKGI: Closed-Loop Knowledge Graph Inference
TLDR
Experimental results on four real-world datasets show that the proposed EngineKGI model outperforms baselines on link prediction tasks, demonstrating its effectiveness on KG inference in a joint logic- and data-driven fashion with a closed-loop mechanism.
Learning and Deduction of Rules for Knowledge Graph Completion
TLDR
This paper adopts six types of basic rules in its framework for searching triples in the KG, and proposes an optimization algorithm, RWK, based on a random walk strategy and K-sized traversal to reduce the execution time of triple search.
Rule Induction and Reasoning over Knowledge Graphs
TLDR
This tutorial presents state-of-the-art rule induction methods, recent advances, research opportunities as well as open challenges along this avenue, with a particular emphasis on the problems of learning exception-enriched rules from highly biased and incomplete data.
RuDaS: Synthetic Datasets for Rule Learning and Evaluation Tools
TLDR
This work presents a tool for generating different kinds of datasets and for evaluating rule learning systems, including new performance measures.

References

Showing 1-10 of 49 references
Exception-Enriched Rule Learning from Knowledge Graphs
TLDR
This work presents a method for effective revision of learned Horn rules by adding exceptions (i.e., negated atoms) into their bodies, and demonstrates the effectiveness of the method and the resulting improvements in accuracy of rule-based fact prediction for KG completion.
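As a concrete illustration of an exception-enriched rule (the predicates here are illustrative, not taken from the paper), a negated atom is appended to the body of a learned Horn rule:

```latex
\mathit{livesIn}(Y, Z) \;\leftarrow\; \mathit{isMarriedTo}(X, Y)
  \,\wedge\, \mathit{livesIn}(X, Z) \,\wedge\, \neg\,\mathit{researcher}(Y)
```

The exception blocks predictions for the subset of body groundings that systematically violate the head, which is the kind of revision the method above performs.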
Knowledge Graph Embedding with Iterative Guidance from Soft Rules
TLDR
Experimental results show that, with rule knowledge injected iteratively, RUGE achieves significant and consistent improvements over state-of-the-art baselines, and that automatically extracted soft rules, despite their uncertainties, are highly beneficial to KG embedding, even those with only moderate confidence levels.
Towards Nonmonotonic Relational Learning from Knowledge Graphs
TLDR
This work makes the first steps towards extending a rule-based approach to KGs in their original relational form, and provides preliminary evaluation results on real-world KGs, which demonstrate the effectiveness of the proposed method.
Estimating Rule Quality for Knowledge Base Completion with the Relationship between Coverage Assumption
TLDR
This work proposes a novel score function for evaluating the quality of a first-order rule learned from a knowledge base, and attempts to include information about the tuples not in the KB when evaluating the quality of a potential rule.
Jointly Embedding Knowledge Graphs and Logical Rules
TLDR
Experimental results show that joint embedding brings significant and consistent improvements over state-of-the-art methods and enhances the prediction of new facts which cannot even be directly inferred by pure logical inference, demonstrating the capability of the method to learn more predictive embeddings.
Knowledge Base Completion Using Embeddings and Rules
TLDR
This paper proposes a novel approach which incorporates rules seamlessly into embedding models for KB completion, and formulates inference as an integer linear programming (ILP) problem, with the objective function generated from embedding models and the constraints translated from rules.
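One standard way of casting this as an ILP, sketched here under assumed notation (C is a set of candidate facts, s_emb their embedding scores, x_f a binary variable indicating whether fact f is accepted), encodes each grounded Horn rule as a linear implication constraint; this is the general shape of such an encoding, not necessarily the paper's exact formulation:

```latex
\max_{x \in \{0,1\}^{|\mathcal{C}|}} \; \sum_{f \in \mathcal{C}} s_{\mathrm{emb}}(f)\, x_f
\quad \text{s.t.} \quad
x_{b_1} + \dots + x_{b_k} - x_h \;\le\; k - 1
\;\; \text{for every grounded rule } b_1 \wedge \dots \wedge b_k \Rightarrow h
```

If all body facts of a grounded rule are accepted, the constraint forces the head fact to be accepted as well; otherwise it is vacuously satisfied.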
Completeness-Aware Rule Learning from Knowledge Graphs
TLDR
Unlike traditional association rule mining, KGs present a setting with a high degree of incompleteness, which may result in incorrect estimates of the quality of mined rules, leading to erroneous beliefs such as that all artists have won an award or that hockey players do not have children.
Training Relation Embeddings under Logical Constraints
TLDR
This work encodes logical rules about entities and relations as convex constraints in the embedding space to enforce the condition that the score of a logically entailed fact must never be less than the minimum score of an antecedent fact.
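Spelled out in (assumed) notation, the enforced condition is:

```latex
s(h) \;\ge\; \min_{1 \le i \le k} s(b_i)
\qquad \text{for every grounded rule } b_1 \wedge \dots \wedge b_k \Rightarrow h
```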
RDF2Rules: Learning Rules from RDF Knowledge Bases by Mining Frequent Predicate Cycles
TLDR
This paper proposes a novel rule learning approach named RDF2Rules for RDF knowledge bases, which uses entity type information when generating and evaluating rules, making the learned rules more accurate.
Knowledge graph refinement: A survey of approaches and evaluation methods
TLDR
A survey of knowledge graph refinement approaches, with a dual look at both the proposed methods and the evaluation methodologies used.