Embedding Entities and Relations for Learning and Inference in Knowledge Bases
@article{Yang2014EmbeddingEA, title={Embedding Entities and Relations for Learning and Inference in Knowledge Bases}, author={Bishan Yang and Wen-tau Yih and Xiaodong He and Jianfeng Gao and Li Deng}, journal={CoRR}, year={2014}, volume={abs/1412.6575} }
Abstract: We consider learning representations of entities and relations in KBs using the neural-embedding approach. […] Under this framework, we compare a variety of embedding models on the link prediction task. We show that a simple bilinear formulation achieves new state-of-the-art results for the task (achieving a top-10 accuracy of 73.2% vs. 54.7% by TransE on Freebase).
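The "simple bilinear formulation" referenced here scores a triple (h, r, t) as e_h^T diag(w_r) e_t, a bilinear form whose relation matrix is restricted to be diagonal (the variant later known as DistMult). A minimal numpy sketch of this scoring function and the abstract's top-10 link-prediction check, with randomly initialized parameters standing in for trained ones and all names ours:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_entities, n_relations = 50, 1000, 20

# Randomly initialized parameters stand in for trained embeddings:
# one vector per entity, one diagonal relation matrix diag(R[r]) per relation.
E = rng.normal(scale=0.1, size=(n_entities, dim))
R = rng.normal(scale=0.1, size=(n_relations, dim))

def score(h, r, t):
    """Bilinear (DistMult-style) score: e_h^T diag(w_r) e_t."""
    return float(np.sum(E[h] * R[r] * E[t]))

def hits_at_10(h, r, true_t):
    """Top-10 accuracy for one query: rank every entity as a candidate
    tail for (h, r) and check whether the true tail lands in the top 10."""
    scores = (E[h] * R[r]) @ E.T  # score all n_entities candidate tails at once
    return true_t in np.argsort(-scores)[:10]
```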
1,981 Citations
A Co-Embedding Model with Variational Auto-Encoder for Knowledge Graphs
- Computer Science · Applied Sciences
- 2022
This paper introduces a co-embedding model for KG embedding, which learns low-dimensional representations of both entities and relations in the same semantic space, and proposes a variational auto-encoder that represents KG components as Gaussian distributions.
Regularizing Knowledge Graph Embeddings via Equivalence and Inversion Axioms
- Computer Science · ECML/PKDD
- 2017
A principled and scalable method for leveraging equivalence and inversion axioms during the learning process, by imposing a set of model-dependent soft constraints on the predicate embeddings, which consistently improves the predictive accuracy of several neural knowledge graph embedding models without compromising their scalability properties.
TransRHS: A Representation Learning Method for Knowledge Graphs with Relation Hierarchical Structure
- Computer Science · IJCAI
- 2020
This paper proposes TransRHS, a novel method that seamlessly and efficiently incorporates the relation hierarchical structure (RHS) into knowledge graph embeddings.
RelWalk – A Latent Variable Model Approach
- Computer Science
- 2018
A learning objective motivated by the theoretical analysis is proposed for learning KGEs from a given knowledge graph, and it is shown that marginal loss minimisation follows naturally from log-likelihood ratio maximisation under the probabilities estimated from the KGEs.
Time-Aware Representation Learning of Knowledge Graphs
- Computer Science · 2021 International Joint Conference on Neural Networks (IJCNN)
- 2021
The proposed RTS model achieves state-of-the-art results in three experiments on two datasets, YAGO11k and Wikidata12k, which validates the effectiveness of the model and shows that the findings can be used to simplify existing models such as HyTE.
Representing and learning relations and properties under uncertainty
- Computer Science
- 2018
This work studies lifted inference, a class of algorithms that makes inference tractable by exploiting both conditional independence and symmetries, and develops relational neural networks that combine ideas from lifted relational models with deep learning and perform well empirically.
Learning Attention-based Embeddings for Relation Prediction in Knowledge Graphs
- Computer Science · ACL
- 2019
This paper proposes a novel attention-based feature embedding that captures both entity and relation features in any given entity's neighborhood, and encapsulates relation clusters and multi-hop relations in the model.
TransEdge: Translating Relation-Contextualized Embeddings for Knowledge Graphs
- Computer Science · SEMWEB
- 2019
A novel edge-centric embedding model TransEdge is proposed, which contextualizes relation representations in terms of specific head-tail entity pairs and interprets them as translations between entity embeddings.
A Triple-Branch Neural Network for Knowledge Graph Embedding
- Computer Science · IEEE Access
- 2018
This paper presents TBNN, a triple-branch neural network for learning KG embeddings, in which the embedding of each element of a KG is determined by its multiple restrictions via an interaction layer followed by parallel branch layers, resulting in stable performance for relations with different mapping properties.
References
Showing 1–10 of 39 references
Translating Embeddings for Modeling Multi-relational Data
- Computer Science · NIPS
- 2013
TransE is proposed, a method that models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities; extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases.
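As a point of contrast with the bilinear scoring sketched above, a minimal sketch of TransE's translation-based energy and the margin ranking loss it is trained with (function names are ours; the paper uses either the L1 or L2 norm):

```python
import numpy as np

def transe_score(e_h, w_r, e_t, ord=1):
    """TransE energy: a true triple (h, r, t) should satisfy e_h + w_r ≈ e_t,
    so a lower distance means a more plausible triple."""
    return float(np.linalg.norm(e_h + w_r - e_t, ord=ord))

def margin_loss(pos_score, neg_score, margin=1.0):
    """Margin-based ranking loss used for training: a true triple should
    score at least `margin` lower than a corrupted (negative) one."""
    return max(0.0, margin + pos_score - neg_score)
```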
Learning Structured Embeddings of Knowledge Bases
- Computer Science · AAAI
- 2011
A learning process is proposed, based on an innovative neural network architecture designed to embed symbolic representations into a more flexible continuous vector space in which the original knowledge is kept and enhanced, allowing data from any KB to be easily used in recent machine learning methods for prediction and information retrieval.
Reasoning With Neural Tensor Networks for Knowledge Base Completion
- Computer Science · NIPS
- 2013
An expressive neural tensor network suitable for reasoning over relationships between two entities given a subset of the knowledge base is introduced, and it is shown that performance improves when entities are represented as the average of their constituent word vectors.
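A sketch of the per-relation NTN scoring function this summary refers to, assuming k tensor slices of size d × d and illustrative variable names:

```python
import numpy as np

def ntn_score(e1, e2, W, V, b, u):
    """Neural Tensor Network score for a candidate triple (e1, R, e2):
    u^T tanh(e1^T W[1..k] e2 + V [e1; e2] + b),
    with a (k, d, d) relation tensor W, a (k, 2d) matrix V,
    and k-dimensional bias b and output weights u."""
    bilinear = np.einsum('i,kij,j->k', e1, W, e2)  # one bilinear form per slice
    standard = V @ np.concatenate([e1, e2]) + b    # ordinary neural-net layer
    return float(u @ np.tanh(bilinear + standard))
```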
A latent factor model for highly multi-relational data
- Computer Science · NIPS
- 2012
This paper proposes a method for modeling large multi-relational datasets, with possibly thousands of relations, based on a bilinear structure, which captures various orders of interaction of the data and also shares sparse latent factors across different relations.
A semantic matching energy function for learning with multi-relational data
- Computer Science · Machine Learning
- 2013
A new neural network architecture designed to embed multi-relational graphs into a flexible continuous vector space in which the original data is kept and enhanced, demonstrating that it can scale up to tens of thousands of nodes and thousands of types of relation.
Typed Tensor Decomposition of Knowledge Bases for Relation Extraction
- Computer Science · EMNLP
- 2014
A highly scalable tensor decomposition approach for knowledge base embedding, especially suitable for relation extraction because it leverages relational domain knowledge about entity types; it is significantly faster than previous approaches and better able to discover new relations missing from the database.
A Three-Way Model for Collective Learning on Multi-Relational Data
- Computer Science · ICML
- 2011
This work presents a novel approach to relational learning based on the factorization of a three-way tensor, which is able to perform collective learning via the latent components of the model, and provides an efficient algorithm to compute the factorization.
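This is the RESCAL factorization; a one-function numpy sketch of how its shared entity factors and per-relation core matrices score facts (names are ours):

```python
import numpy as np

def rescal_scores(A, R_k):
    """RESCAL factorizes each relation's adjacency slice as X_k ≈ A R_k A^T:
    A (n x r) holds one latent vector per entity, shared across all relations
    (this sharing is what enables collective learning), while the full,
    asymmetric core R_k (r x r) models how latent components interact under
    relation k. Entry (i, j) scores the fact (entity_i, relation_k, entity_j)."""
    return A @ R_k @ A.T
```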
Modeling Interestingness with Deep Neural Networks
- Computer Science · EMNLP
- 2014
The results on large-scale, real-world datasets show that the semantics of documents are important for modeling interestingness and that the DSSM leads to significant quality improvement on both tasks, outperforming not only the classic document models that do not use semantics but also state-of-the-art topic models.
Factorizing YAGO: scalable machine learning for linked data
- Computer Science · WWW
- 2012
This work presents an efficient approach to relational learning on LOD data, based on the factorization of a sparse tensor that scales to data consisting of millions of entities, hundreds of relations and billions of known facts, and shows how ontological knowledge can be incorporated in the factorizations to improve learning results and how computation can be distributed across multiple nodes.
Learning deep structured semantic models for web search using clickthrough data
- Computer Science · CIKM
- 2013
A series of new latent semantic models with a deep structure is developed, projecting queries and documents into a common low-dimensional space where the relevance of a document given a query is readily computed as the distance between them.
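A minimal sketch of the relevance computation described here, assuming the query and document have already been passed through the deep projection network (which is omitted); DSSM measures relevance with cosine similarity in the shared space:

```python
import numpy as np

def dssm_relevance(q_vec, d_vec):
    """DSSM-style relevance: cosine similarity between the deep
    projections of a query and a document in the shared semantic space."""
    return float(q_vec @ d_vec /
                 (np.linalg.norm(q_vec) * np.linalg.norm(d_vec)))
```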