PromptKG: A Prompt Learning Framework for Knowledge Graph Representation Learning and Application
@article{Xie2022PromptKGAP,
  title   = {PromptKG: A Prompt Learning Framework for Knowledge Graph Representation Learning and Application},
  author  = {Xin Xie and Zhoubo Li and Xiaohan Wang and Shumin Deng and Feiyu Xiong and Huajun Chen and Ningyu Zhang},
  journal = {ArXiv},
  year    = {2022},
  volume  = {abs/2210.00305}
}
Knowledge Graphs (KGs) often have two characteristics: heterogeneous graph structure and text-rich entity/relation information. KG representation models should consider both graph structure and text semantics, but no comprehensive open-source framework is designed specifically for KGs with informative text descriptions. In this paper, we present PromptKG, a prompt learning framework for KG representation learning and application that equips cutting-edge text-based methods, integrates a new…
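To make the setting concrete, here is a minimal sketch of the general prompt-based KG completion idea (illustrative only: the checkpoint, prompt wording, and triple are assumptions, and this is not PromptKG's actual API). A triple with a missing tail entity is verbalized into a cloze prompt and scored with a masked language model.

```python
# Sketch of prompt-based KG completion (NOT PromptKG's API): verbalize a
# triple with an unknown tail as a cloze prompt and let a masked LM rank
# candidate fill-ins.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Hypothetical triple (Hawaii, located_in, ?) rendered as natural text.
prompt = "Hawaii is located in [MASK]."

# The LM's top predictions serve as candidate tail entities.
for candidate in fill_mask(prompt, top_k=3):
    print(candidate["token_str"], round(candidate["score"], 3))
```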
3 Citations
Schema-aware Reference as Prompt Improves Data-Efficient Relational Triple and Event Extraction
- Computer Science, ArXiv
- 2022
A novel approach, schema-aware Reference As Prompt (RAP), which dynamically leverages schema and knowledge inherited from global (few-shot) training data for each sample and employs a dynamic reference integration module to retrieve pertinent knowledge from the datastore as prompts during training and inference.
Relphormer: Relational Graph Transformer for Knowledge Graph Representation
- Computer Science, ArXiv
- 2022
This work proposes Relphormer, a new Transformer variant for knowledge graph representation that dynamically samples contextualized sub-graph sequences as Transformer input to alleviate the scalability issue, and introduces a novel structure-enhanced self-attention mechanism to encode relational information and preserve the global semantic information among sub-graphs.
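As a toy illustration of structure-enhanced self-attention in general (the additive-bias form below is an assumption for illustration, not Relphormer's actual mechanism), one can bias attention logits with a term derived from the sampled sub-graph's adjacency:

```python
# Toy structure-biased attention over a sampled sub-graph (assumed form).
import torch
import torch.nn.functional as F

n, d = 5, 16                              # sub-graph nodes, hidden dim
x = torch.randn(n, d)                     # node/entity representations
adj = (torch.rand(n, n) > 0.5).float()    # toy sub-graph adjacency

w_q, w_k = torch.randn(d, d), torch.randn(d, d)
scores = (x @ w_q) @ (x @ w_k).T / d ** 0.5
scores = scores + adj                     # structural bias on the logits
attn = F.softmax(scores, dim=-1)          # attention aware of graph edges
print(attn.shape)                         # (n, n)
```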
Commonsense Knowledge Salience Evaluation with a Benchmark Dataset in E-commerce
- Computer Science, ArXiv
- 2022
The task of supervised salience evaluation is formulated: given a commonsense knowledge (CSK) triple, the model must predict whether the triple is salient. A simple but effective approach, PMI-tuning, is proposed and shows promise on this novel problem.
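For background on the name, pointwise mutual information (PMI) measures how much more often two items co-occur than independence would predict. The toy computation below uses hypothetical corpus counts; how PMI-tuning folds this quantity into training is not shown here.

```python
# PMI(x, y) = log( p(x, y) / (p(x) * p(y)) ), from toy (hypothetical) counts.
import math

n_total = 10_000          # corpus size
n_x, n_y, n_xy = 200, 150, 30

pmi = math.log((n_xy / n_total) / ((n_x / n_total) * (n_y / n_total)))
print(round(pmi, 3))      # positive: x and y co-occur more than chance
```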
References
Showing 1-10 of 20 references
CogKGE: A Knowledge Graph Embedding Toolkit and Benchmark for Representing Multi-source and Heterogeneous Knowledge
- Computer Science, ACL
- 2022
This paper proposes CogKGE, a knowledge graph embedding (KGE) toolkit, which aims to represent multi-source and heterogeneous knowledge and provides pre-trained embedders to discover new facts, cluster entities and check facts.
DeepKE: A Deep Learning Based Knowledge Extraction Toolkit for Knowledge Base Population
- Computer Science, ArXiv
- 2022
We present DeepKE, an open-source and extensible knowledge extraction toolkit supporting complicated low-resource, document-level, and multimodal scenarios in knowledge base population. DeepKE…
Structure-Augmented Text Representation Learning for Efficient Knowledge Graph Completion
- Computer Science, WWW
- 2021
This paper partitions each triple into two asymmetric parts, as in translation-based graph embedding approaches, and encodes both parts into contextualized representations with a Siamese-style textual encoder; this reduces overhead by reusing graph elements' embeddings to avoid combinatorial explosion, and enhances structured knowledge by exploiting spatial characteristics.
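A rough sketch of that bi-encoder idea (the encoder checkpoint and strings below are stand-in assumptions, not the paper's code): (head, relation) queries and candidate tails are encoded separately, so tail embeddings can be precomputed and reused across queries instead of running a cross-encoder over every pair.

```python
# Siamese-style scoring: encode the (head, relation) query and the tails
# independently, then rank tails by cosine similarity.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed stand-in encoder

query = encoder.encode("Hawaii located in", convert_to_tensor=True)
tails = ["United States", "Pacific Ocean", "France"]
tail_embs = encoder.encode(tails, convert_to_tensor=True)  # reusable cache

scores = util.cos_sim(query, tail_embs)[0]
best = scores.argmax().item()
print(tails[best], scores[best].item())
```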
KG-BERT: BERT for Knowledge Graph Completion
- Computer Science, ArXiv
- 2019
This work treats triples in knowledge graphs as textual sequences and proposes a novel framework named Knowledge Graph Bidirectional Encoder Representations from Transformer (KG-BERT) to model these triples.
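A hedged sketch of that triple-as-text idea (not the authors' code; the untuned checkpoint below yields meaningless scores until the classifier is fine-tuned on true vs. corrupted triples, a step omitted here):

```python
# KG-BERT-style scoring sketch: linearize a triple (h, r, t) into one
# sequence and let a sequence classifier judge its plausibility.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

text = "Hawaii [SEP] located in [SEP] United States"  # hypothetical triple
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(torch.softmax(logits, dim=-1))  # plausible vs. implausible (untrained)
```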
Sequence-to-Sequence Knowledge Graph Completion and Question Answering
- Computer Science, ACL
- 2022
It is shown that an off-the-shelf encoder-decoder Transformer model can serve as a scalable and versatile KGE model, obtaining state-of-the-art results for KG link prediction and incomplete KG question answering.
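An illustrative sketch of this sequence-to-sequence formulation (a generic t5-small checkpoint and an assumed query format stand in for a model fine-tuned on KG triples): the decoder generates tail-entity text directly from a (head, relation) query string.

```python
# Seq2seq link prediction sketch: generate the tail entity as text.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer("Hawaii | located in |", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```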
SimKGC: Simple Contrastive Knowledge Graph Completion with Pre-trained Language Models
- Computer Science, ACL
- 2022
This paper introduces three types of negatives, in-batch negatives, pre-batch negatives, and self-negatives (the last acting as a simple form of hard negatives), and shows that the resulting model can substantially outperform embedding-based methods on several benchmark datasets.
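A toy sketch of the in-batch negatives component (the other two negative types are omitted; the temperature value is an assumption): each (head, relation) embedding is trained to match its own tail, while the other tails in the batch act as negatives, giving B-1 negatives per example essentially for free.

```python
# InfoNCE-style loss with in-batch negatives (toy tensors).
import torch
import torch.nn.functional as F

B, d = 4, 8                                       # batch size, embedding dim
query = F.normalize(torch.randn(B, d), dim=-1)    # encoded (head, relation)
tails = F.normalize(torch.randn(B, d), dim=-1)    # encoded tail entities

logits = query @ tails.T / 0.05     # cosine similarities / temperature
labels = torch.arange(B)            # diagonal pairs are the true tails
loss = F.cross_entropy(logits, labels)
print(loss.item())
```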
From Discrimination to Generation: Knowledge Graph Completion with Generative Transformer
- Computer Science, WWW
- 2022
This paper presents GenKGC, which converts knowledge graph completion into a sequence-to-sequence generation task with a pre-trained language model and introduces relation-guided demonstrations and entity-aware hierarchical decoding for better representation learning and fast inference.
OpenKE: An Open Toolkit for Knowledge Embedding
- Computer Science, EMNLP
- 2018
An open toolkit for knowledge embedding that provides a unified framework and various fundamental models for embedding knowledge graphs into a continuous low-dimensional space; embeddings of some existing large-scale knowledge graphs pre-trained by OpenKE are also available.
PyKEEN 1.0: A Python Library for Training and Evaluating Knowledge Graph Embeddings
- Computer Science, J. Mach. Learn. Res.
- 2021
PyKEEN 1.0 is a community-driven re-design and re-implementation of one of the first KGE libraries; through the integration of Optuna, it provides extensive hyper-parameter optimization (HPO) functionality.
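A short usage sketch of PyKEEN's pipeline API (argument names follow the PyKEEN 1.x documentation; treat the exact kwargs and metric key as assumptions if versions differ). The Optuna-backed HPO mentioned above is exposed through a parallel pykeen.hpo.hpo_pipeline entry point.

```python
# Train and evaluate a KGE model on a small built-in benchmark.
from pykeen.pipeline import pipeline

result = pipeline(
    dataset="Nations",                    # tiny built-in dataset
    model="TransE",
    training_kwargs=dict(num_epochs=5),
)
print(result.metric_results.get_metric("hits@10"))
```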
Representing Text for Joint Embedding of Text and Knowledge Bases
- Computer Science, EMNLP
- 2015
A model is proposed that captures the compositional structure of textual relations and jointly optimizes entity, knowledge base, and textual relation representations; it significantly improves over a model that does not share parameters among textual relations with common sub-structure.