Expanding Holographic Embeddings for Knowledge Completion

@inproceedings{Xue2018ExpandingHE,
  title={Expanding Holographic Embeddings for Knowledge Completion},
  author={Yexiang Xue and Yang Yuan and Zhitian Xu and Ashish Sabharwal},
  booktitle={NeurIPS},
  year={2018}
}
Neural models operating over structured spaces such as knowledge graphs require a continuous embedding of the discrete elements of this space (such as entities) as well as the relationships between them. Relational embeddings with high expressivity, however, have high model complexity, making them computationally difficult to train. We propose a new family of embeddings for knowledge graphs that interpolate between a method with high model complexity and one, namely Holographic embeddings, with…
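Holographic embeddings (HolE), the low-complexity endpoint the abstract refers to, score a triple (h, r, t) by composing the entity vectors with circular correlation, which can be computed in O(d log d) time via the FFT. A minimal NumPy sketch of that scoring operation (function names are illustrative, not from the paper's code):

```python
import numpy as np

def circular_correlation(a, b):
    # (a \star b)_k = sum_i a_i * b_{(i+k) mod d};
    # the FFT identity ifft(conj(fft(a)) * fft(b)) computes this in O(d log d)
    return np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)).real

def hole_score(h, r, t):
    # HolE plausibility score: inner product of the relation vector
    # with the circular correlation of head and tail embeddings
    return float(np.dot(r, circular_correlation(h, t)))

# toy usage with random embeddings of dimension 64
rng = np.random.default_rng(0)
h, r, t = (rng.normal(size=64) for _ in range(3))
score = hole_score(h, r, t)
```

Because circular correlation compresses the full d×d interaction matrix into a single d-dimensional vector, HolE keeps the parameter count linear in d, which is the model-complexity trade-off the proposed family interpolates over.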


Key Quantitative Results

  • On the Freebase FB15K dataset, HolEx outperforms original holographic embeddings by 13.7% on the HITS@10 metric, and the current state-of-the-art by 3.1% (absolute).
  • In terms of the standard HITS@10 metric, HolEx using 16 random 0/1 vectors outperforms the original HolE by 14.7% (absolute), ProjE by 5.7%, and a path-based state-of-the-art method by 4%.
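The "16 random 0/1 vectors" in the second result refer to HolEx's expansion trick: each fixed 0/1 vector c_j perturbs the head entity elementwise before the holographic composition, and the per-vector scores are combined. A hedged sketch of this idea, assuming an additive combination for illustration (the exact combination and training objective follow the paper; names here are hypothetical):

```python
import numpy as np

def ccorr(a, b):
    # circular correlation via the FFT, the composition operator used in HolE
    return np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)).real

def holex_score(h, r_list, t, masks):
    # HolEx-style score (sketch): perturb the head embedding with each
    # 0/1 mask c_j, compose with the tail, and combine the per-mask
    # inner products with the corresponding relation vectors r_j
    return sum(float(np.dot(r_j, ccorr(c_j * h, t)))
               for c_j, r_j in zip(masks, r_list))

# toy usage: 16 random 0/1 masks, matching the setting in the reported result
rng = np.random.default_rng(0)
d, J = 64, 16
h, t = rng.normal(size=d), rng.normal(size=d)
r_list = [rng.normal(size=d) for _ in range(J)]
masks = [rng.integers(0, 2, size=d).astype(float) for _ in range(J)]
s = holex_score(h, r_list, t, masks)
```

With a single all-ones mask the sketch reduces to plain HolE; adding masks interpolates toward a higher-capacity model, which is the expressivity/complexity trade-off the paper studies.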

Citations

Publications citing this paper.

AutoKGE: Searching Scoring Functions for Knowledge Graph Embedding

