Corpus ID: 15027084

Knowledge Graph Embedding by Translating on Hyperplanes

@inproceedings{Wang2014KnowledgeGE,
  title={Knowledge Graph Embedding by Translating on Hyperplanes},
  author={Zhen Wang and Jianwen Zhang and Jianlin Feng and Zheng Chen},
  booktitle={AAAI},
  year={2014}
}
We deal with embedding a large-scale knowledge graph composed of entities and relations into a continuous vector space. [...] Key Result: Experiments show TransH delivers significant improvements over TransE in predictive accuracy, with comparable ability to scale up.
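As a minimal sketch of the translating-on-hyperplanes idea summarized above: TransH gives each relation a hyperplane (unit normal w_r) and a translation vector d_r lying in that hyperplane, projects the head and tail entity embeddings onto the hyperplane, and scores the triple by the distance between the translated head projection and the tail projection. The variable names below are illustrative, not from the paper's code.

```python
import numpy as np

def transh_score(h, t, d_r, w_r):
    """TransH plausibility score for a triple (h, r, t).

    h, t : entity embedding vectors
    d_r  : relation-specific translation vector (in the hyperplane)
    w_r  : normal vector of the relation's hyperplane
    Lower score means a more plausible triple.
    """
    w_r = w_r / np.linalg.norm(w_r)        # enforce the unit-norm constraint
    h_proj = h - np.dot(w_r, h) * w_r      # project head onto the hyperplane
    t_proj = t - np.dot(w_r, t) * w_r      # project tail onto the hyperplane
    return np.linalg.norm(h_proj + d_r - t_proj)
```

Because entities are projected per relation before translating, two entities can coincide on one relation's hyperplane while staying distinct on another's, which is how TransH handles 1-to-N, N-to-1, and N-to-N relations better than TransE.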
Locally Adaptive Translation for Knowledge Graph Embedding
TLDR
This paper proposes a locally adaptive translation method for knowledge graph embedding, called TransA, to find the optimal loss function by adaptively determining its margin over different knowledge graphs.
Learning Knowledge Graph Embeddings via Generalized Hyperplanes
TLDR
A novel translation-based method called translation on generalized hyperplanes (TransGH), which extends TransH by defining a generalized hyperplane for entity projection, can capture richer interactions between entities and relations, and has strong expressiveness in the mapping properties of knowledge graphs.
Knowledge Graph Embedding with Diversity of Structures
  • Wen Zhang • WWW • 2017
TLDR
This paper proposes a method to decompose ORC substructures by using two vectors to represent an entity as a head or tail entity under the same relation, and shows that applying this method improves on the corresponding original results of TransH, TransR, and TransD.
A neural translating general hyperplane for knowledge graph embedding
TLDR
Experimental results show that NTransGH has strong expression in mapping properties of complex relations, and achieves significant and consistent improvements over state-of-the-art embedding methods.
Generalized Translation-Based Embedding of Knowledge Graph
TLDR
This paper proposes knowledge graph embedding on a Lie group (KGLG) and the Weighted Negative Part (WNP) method for the objective function of translation-based models and shows that TorusE, KGLG on a torus, is scalable to large-size knowledge graphs and faster than the original TransE.
RatE: Relation-Adaptive Translating Embedding for Knowledge Graph Completion
TLDR
A relation-adaptive translation function built upon a novel weighted product in complex space, where the weights are learnable, relation-specific, and independent of embedding size, which improves expressive power and alleviates the embedding-ambiguity problem.
Enhancing Knowledge Graph Embedding with Relational Constraints
TLDR
This paper carefully designs the score function by encoding regularities between a relation and its arguments into the translation-based embedding space, and proposes a soft margin-based ranking loss for effectively training the KRC model, which characterizes different semantic distances between negative and positive triplets.
Efficient parallel translating embedding for knowledge graphs
TLDR
An efficient parallel framework for translating embedding methods, called ParTrans-X, is proposed, which enables the methods to be paralleled without locks by utilizing the distinguished structures of knowledge graphs.
GCN-VAE for Knowledge Graph Completion
Knowledge graphs are a powerful abstraction for representing relational facts among entities. Since most real-world knowledge graphs are manually collected and largely incomplete, predicting missing links [...]
Knowledge Graph Embedding via Dynamic Mapping Matrix
TLDR
A more fine-grained model named TransD, an improvement of TransR/CTransR, which considers the diversity not only of relations but also of entities, and can therefore be applied to large-scale graphs.
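The "dynamic mapping matrix" in TransD can be sketched as follows: each entity and relation carries a second projection vector, and the mapping matrix for a triple is built from the outer product of those vectors plus the identity, so it depends on both the entity and the relation while costing only vector-sized parameters. Names below are illustrative.

```python
import numpy as np

def transd_project(e, e_p, r_p):
    """Project entity e with the dynamic mapping matrix M = r_p e_p^T + I."""
    M = np.outer(r_p, e_p) + np.eye(r_p.shape[0], e_p.shape[0])
    return M @ e

def transd_score(h, h_p, t, t_p, r, r_p):
    """Distance score after projecting head and tail into relation space."""
    return np.linalg.norm(
        transd_project(h, h_p, r_p) + r - transd_project(t, t_p, r_p)
    )
```

With all projection vectors set to zero the mapping matrix reduces to the identity and the score collapses to TransE's ||h + r - t||, which shows how TransD generalizes the simpler translation models.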

References

Showing 1–10 of 21 references
Translating Embeddings for Modeling Multi-relational Data
TLDR
TransE is proposed, a method which models relationships by interpreting them as translations operating on the low-dimensional embeddings of the entities, which proves to be powerful since extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases.
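The translation idea in TransE reduces to a one-line score: a triple (h, r, t) is plausible when the tail embedding sits near the head embedding translated by the relation vector. A minimal sketch (illustrative names, not the authors' code):

```python
import numpy as np

def transe_score(h, r, t):
    """TransE score ||h + r - t||: near zero for plausible triples."""
    return np.linalg.norm(h + r - t)
```

TransH, the paper this page belongs to, keeps this translation form but applies it on a relation-specific hyperplane to fix TransE's difficulty with 1-to-N and N-to-N relations.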
A semantic matching energy function for learning with multi-relational data
TLDR
A new neural network architecture designed to embed multi-relational graphs into a flexible continuous vector space in which the original data is kept and enhanced, demonstrating that it can scale up to tens of thousands of nodes and thousands of relation types.
Learning Structured Embeddings of Knowledge Bases
TLDR
A learning process based on an innovative neural network architecture designed to embed any of these symbolic representations into a more flexible continuous vector space in which the original knowledge is kept and enhanced, allowing data from any KB to be easily used in recent machine learning methods for prediction and information retrieval.
Modeling Relations and Their Mentions without Labeled Text
TLDR
A novel approach to distant supervision that can alleviate the problem of noisy patterns that hurt precision by using a factor graph and applying constraint-driven semi-supervision to train this model without any knowledge about which sentences express the relations in the authors' training KB.
A latent factor model for highly multi-relational data
TLDR
This paper proposes a method for modeling large multi-relational datasets, with possibly thousands of relations, based on a bilinear structure, which captures various orders of interaction of the data and also shares sparse latent factors across different relations.
Irreflexive and Hierarchical Relations as Translations
TLDR
Preliminary experiments show that, despite its simplicity and a smaller number of parameters than previous approaches, this approach achieves state-of-the-art performance according to standard evaluation protocols on data from WordNet and Freebase.
A Three-Way Model for Collective Learning on Multi-Relational Data
TLDR
This work presents a novel approach to relational learning based on the factorization of a three-way tensor that is able to perform collective learning via the latent components of the model and provide an efficient algorithm to compute the factorizations.
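The three-way tensor factorization described here (RESCAL) scores a triple bilinearly: each relation is a matrix M_r and the score is h^T M_r t, so entities interact through relation-specific weights rather than by translation. A minimal sketch with illustrative names:

```python
import numpy as np

def rescal_score(h, M_r, t):
    """Bilinear RESCAL score h^T M_r t; higher means more plausible."""
    return h @ M_r @ t
```

Stacking one M_r per relation gives the frontal slices of the factorized three-way tensor; collective learning arises because all relations share the same entity factors h and t.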
Reasoning With Neural Tensor Networks for Knowledge Base Completion
TLDR
An expressive neural tensor network suitable for reasoning over relationships between two entities given a subset of the knowledge base is introduced and performance can be improved when entities are represented as an average of their constituting word vectors.
Multi-instance Multi-label Learning for Relation Extraction
TLDR
This work proposes a novel approach to multi-instance multi-label learning for RE, which jointly models all the instances of a pair of entities in text and all their labels using a graphical model with latent variables that performs competitively on two difficult domains.
Multi-Relational Latent Semantic Analysis
TLDR
It is demonstrated that by integrating multiple relations from both homogeneous and heterogeneous information sources, MRLSA achieves state-of-the-art performance on existing benchmark datasets for two relations, antonymy and is-a.