Knowledge Graph Embedding for Link Prediction and Triplet Classification

@inproceedings{Shijia2016KnowledgeGE,
  title={Knowledge Graph Embedding for Link Prediction and Triplet Classification},
  author={E. Shijia and Shengbin Jia and Yang Xiang and Zilian Ji},
  booktitle={CCKS},
  year={2016}
}
Link prediction (LP) and triplet classification (TC) are important tasks in knowledge graph mining. However, traditional link-prediction methods for social networks cannot be directly applied to knowledge-graph data, which contain multiple relations. In this paper, we apply a knowledge graph embedding method to these tasks on the Chinese knowledge base Zhishi.me. The proposed method has been successfully used in the evaluation task of CCKS2016. Hopefully, it can…
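As a rough illustration of the embedding-based setup the abstract describes, a translation-style score over entity and relation vectors can be thresholded to classify triples. The embeddings, entity names, and threshold below are made-up placeholders for illustration, not the paper's trained values:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 50

# Toy embedding tables (randomly initialized here; in practice these
# would be trained on knowledge-base triples).
entities = {name: rng.normal(size=dim) for name in ["beijing", "china", "tokyo"]}
relations = {"capital_of": rng.normal(size=dim)}

def score(h, r, t):
    """Translation-based plausibility: lower ||h + r - t|| means more plausible."""
    return np.linalg.norm(entities[h] + relations[r] - entities[t])

# Triplet classification: accept a triple if its score falls under a
# threshold, which would normally be tuned per relation on validation data.
threshold = 10.0  # hypothetical value

def classify(h, r, t):
    return score(h, r, t) < threshold
```

Link prediction then amounts to ranking candidate tail (or head) entities by the same score.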
Triple Scoring Using a Hybrid Fact Validation Approach - The Catsear Triple Scorer at WSDM Cup 2017
This work describes the participation of the Catsear team in the Triple Scoring Challenge at the WSDM Cup 2017 and shows how their approach achieved an Accuracy2 value of 79.58% and the overall 4th place.
Discovering functionality of urban regions by learning low-dimensional representations of a spatial multiplex network
This work introduces a novel neural network architecture that jointly learns low-dimensional representations of each network node from multiple layers of a network; results indicate that the proposed approach can improve on the accuracy of traditional approaches in an unsupervised task.
Challenges and opportunities: from big data to knowledge in AI 2.0
It is concluded that integrating data-driven machine learning with human knowledge can effectively lead to explainable, robust, and general AI.

References

Transition-based Knowledge Graph Embedding with Relational Mapping Properties
A model is proposed that leverages the structure of the knowledge graph by pre-calculating a distinct weight for each training triplet according to its relational mapping property, and is compared with the state-of-the-art method TransE and other prior art.
The link-prediction problem for social networks
Experiments on large coauthorship networks suggest that information about future interactions can be extracted from network topology alone, and that fairly subtle measures for detecting node proximity can outperform more direct measures.
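Two classic topology-only proximity measures from this line of work, common neighbors and Adamic–Adar, can be sketched on a toy coauthorship graph (the graph below is invented for illustration):

```python
import math
from collections import defaultdict

# Toy undirected coauthorship graph as adjacency sets.
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

def common_neighbors(u, v):
    """Number of shared neighbours of u and v."""
    return len(adj[u] & adj[v])

def adamic_adar(u, v):
    """Shared neighbours weighted inversely by how prolific each one is."""
    return sum(1.0 / math.log(len(adj[z]))
               for z in adj[u] & adj[v] if len(adj[z]) > 1)
```

Candidate pairs with higher scores are predicted as more likely future links.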
Translating Embeddings for Modeling Multi-relational Data
TransE is proposed, a method that models relationships by interpreting them as translations operating on low-dimensional embeddings of the entities; extensive experiments show that TransE significantly outperforms state-of-the-art methods in link prediction on two knowledge bases.
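TransE's translation idea is commonly trained with a margin ranking loss over true and corrupted triples; a generic reimplementation of that loss (not the authors' released code) might look like:

```python
import numpy as np

def transe_margin_loss(h, r, t, h_neg, t_neg, margin=1.0):
    """Margin ranking loss in the style of TransE: push the distance
    ||h + r - t|| of a true triple at least `margin` below the distance
    of a corrupted triple (h_neg, r, t_neg)."""
    pos = np.linalg.norm(h + r - t)
    neg = np.linalg.norm(h_neg + r - t_neg)
    return max(0.0, margin + pos - neg)
```

During training, corrupted triples are typically produced by replacing the head or tail of a true triple with a random entity.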
Holographic Embeddings of Knowledge Graphs
Holographic embeddings are proposed to learn compositional vector-space representations of entire knowledge graphs, and are shown to outperform state-of-the-art methods for link prediction on knowledge graphs and relational learning benchmark datasets.
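The compositional operator behind holographic embeddings is circular correlation, which can be computed efficiently in the Fourier domain; a minimal sketch of the scoring function, assuming unit conventions of NumPy's FFT:

```python
import numpy as np

def circular_correlation(h, t):
    """HolE composes two entity vectors by circular correlation,
    computed here via the FFT identity corr(h, t) = ifft(conj(fft(h)) * fft(t))."""
    return np.fft.ifft(np.conj(np.fft.fft(h)) * np.fft.fft(t)).real

def hole_score(h, r, t):
    """Plausibility of (h, r, t): relation vector dotted with the composition."""
    return float(r @ circular_correlation(h, t))
```

Because correlation is non-commutative, the score can distinguish the direction of a relation, unlike a plain dot product of h and t.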
Learning Entity and Relation Embeddings for Knowledge Graph Completion
TransR is proposed to build entity and relation embeddings in separate entity and relation spaces, learning translations between projected entities; the models are evaluated on three tasks: link prediction, triple classification, and relational fact extraction.
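TransR's relation-specific projection can be sketched in a few lines; this is a generic reimplementation, where the projection matrix M_r would be learned per relation:

```python
import numpy as np

def transr_score(h, t, r, M_r):
    """TransR first projects entity vectors into the relation's own space
    with a relation-specific matrix M_r, then applies the translation r.
    Lower scores mean more plausible triples."""
    h_r = M_r @ h  # head projected into relation space
    t_r = M_r @ t  # tail projected into relation space
    return np.linalg.norm(h_r + r - t_r)
```

With M_r set to the identity matrix, the score reduces to plain TransE, which makes the relationship between the two models easy to see.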
Distributed Representations of Sentences and Documents
Paragraph Vector is an unsupervised algorithm that learns fixed-length feature representations from variable-length pieces of text, such as sentences, paragraphs, and documents; its construction gives the algorithm the potential to overcome the weaknesses of bag-of-words models.
Convolutional Neural Networks for Sentence Classification
The CNN models discussed herein improve upon the state of the art on 4 out of 7 tasks, including sentiment analysis and question classification, and a simple modification of the architecture is proposed to allow the use of both task-specific and static vectors.
Learning representations by back-propagating errors
Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector, which helps the network come to represent important features of the task domain.
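The weight-update rule can be illustrated on a single linear unit with squared error; this toy sketch shows only the gradient step, not the paper's multi-layer formulation:

```python
# One gradient step for a single linear unit with squared error
# E = 0.5 * (y_hat - y)**2, illustrating the weight-update rule.
def backprop_step(w, b, x, y, lr=0.1):
    y_hat = sum(wi * xi for wi, xi in zip(w, x)) + b
    err = y_hat - y  # dE/dy_hat
    # Chain rule: dE/dwi = err * xi, dE/db = err; step against the gradient.
    w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    b = b - lr * err
    return w, b
```

Repeating this step shrinks the error geometrically on this one-unit example; in a multi-layer network the same chain-rule logic propagates the error term backwards through the hidden layers.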
Adam: A Method for Stochastic Optimization
This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
Dropout: a simple way to prevent neural networks from overfitting
It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification, and computational biology, obtaining state-of-the-art results on many benchmark data sets.
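The mechanism can be sketched with inverted dropout, a common implementation variant (the rescaling at training time is a standard practical choice, not necessarily the paper's exact formulation):

```python
import random

def dropout(activations, p=0.5, training=True):
    """Inverted dropout: zero each unit with probability p at training time
    and rescale the survivors by 1/(1-p) so the expected activation is
    unchanged; at test time the layer is the identity."""
    if not training:
        return list(activations)
    return [0.0 if random.random() < p else a / (1 - p) for a in activations]
```

Randomly thinning the network this way prevents units from co-adapting, which is the regularization effect the paper demonstrates across tasks.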