Cross-lingual Entity Alignment with Incidental Supervision

Muhao Chen, Weijia Shi, Ben Zhou, Dan Roth

Much research effort has been put into multilingual knowledge graph (KG) embedding methods for the entity alignment task, which seeks to match entities in different language-specific KGs that refer to the same real-world object. Such methods are often hindered by the insufficiency of seed alignment provided between KGs. Therefore, we propose a new model, JEANS, which jointly represents multilingual KGs and text corpora in a shared embedding scheme, and seeks to improve entity alignment…

Multilingual Knowledge Graph Completion with Joint Relation and Entity Alignment
ALIGNKGC addresses the novel task of jointly training multilingual KGC, relation alignment (RA) and entity alignment (EA) models, and achieves appreciable gains on the EA and RA tasks over a vanilla completion model trained on a KG that combines all facts without alignment, underscoring the value of joint training for these tasks.
Time-aware Entity Alignment using Temporal Relational Attention
This work proposes a novel Temporal Relational Entity Alignment method (TREA) that learns alignment-oriented TKG embeddings and can represent newly emerging entities. Experimental results show that the method outperforms state-of-the-art EA methods.
ICLEA: Interactive Contrastive Learning for Self-supervised Entity Alignment
Experimental results show that the approach outperforms the previous best self-supervised results by a large margin and performs on par with previous SOTA supervised counterparts, demonstrating the effectiveness of interactive contrastive learning for self-supervised EA.
Knowing the No-match: Entity Alignment with Dangling Cases
It is discovered that the dangling entity detection module can, in turn, improve alignment learning and final performance, and that an entity alignment model incorporated into this framework provides more robust alignment for the remaining entities.
Prix-LM: Pretraining for Multilingual Knowledge Base Construction
Prix-LM integrates useful multilingual and KB-based factual knowledge into a single model and demonstrates its effectiveness on standard entity-related tasks, such as link prediction in multiple languages, cross-lingual entity linking and bilingual lexicon induction.
Adversarial Attack against Cross-lingual Knowledge Graph Alignment
This work proposes an adversarial attack model with two novel attack techniques that perturb the KG structure and degrade the quality of deep cross-lingual entity alignment.
Are Negative Samples Necessary in Entity Alignment?: An Approach with High Performance, Scalability and Robustness
This work proposes a novel EA method with three new components enabling high Performance, high Scalability, and high Robustness (PSR), and shows that PSR not only surpasses the previous SOTA in performance but also offers impressive scalability and robustness.
From Alignment to Assignment: Frustratingly Simple Unsupervised Entity Alignment
This work transforms the cross-lingual EA problem into an assignment problem and proposes a frustratingly Simple but Effective Unsupervised entity alignment method (SEU) that uses no neural networks, yet beats advanced supervised methods across all public datasets while offering high efficiency, interpretability, and stability.
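The core idea of casting EA as an assignment problem can be illustrated with a toy similarity matrix; the entities and scores below are hypothetical, and SciPy's Hungarian-algorithm solver stands in for SEU's actual optimization:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical similarity matrix between 3 source-KG and 3 target-KG entities
# (rows: source entities, cols: target entities); higher = more similar.
sim = np.array([
    [0.9, 0.1, 0.2],
    [0.2, 0.8, 0.3],
    [0.1, 0.4, 0.7],
])

# Assignment problem: find the 1-to-1 matching that maximizes total similarity.
# The solver minimizes cost, so negate the similarities.
rows, cols = linear_sum_assignment(-sim)
print(list(zip(rows.tolist(), cols.tolist())))  # [(0, 0), (1, 1), (2, 2)]
```

Once pairwise similarities are computed, the matching itself needs no learned parameters, which is what makes this family of methods unsupervised and highly efficient.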
XLM-K: Improving Cross-Lingual Language Model Pre-Training with Multilingual Knowledge
XLM-K is proposed: a cross-lingual language model that incorporates multilingual knowledge into pre-training via two knowledge tasks, namely a Masked Entity Prediction task and an Object Entailment task, together with a detailed probing analysis of the knowledge captured during pre-training.


Cross-lingual Knowledge Graph Alignment via Graph Convolutional Networks
This paper proposes a novel approach for cross-lingual KG alignment via graph convolutional networks (GCNs): given a set of pre-aligned entities, GCNs are trained to embed the entities of each language into a unified vector space.
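A rough sketch of the GCN propagation such approaches rely on, using the common symmetric normalization (the toy graph, features, and weights below are stand-ins, not the paper's model; in the paper's setting one GCN per KG is trained so that a margin loss over seed pairs pulls aligned entities together):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: ReLU(D^{-1/2} (A+I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])                     # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))      # inverse sqrt degrees
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(A_norm @ H @ W, 0.0)             # aggregate, project, ReLU

# Toy 3-node path graph with random 4-d node features and a 4->2 projection.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = rng.normal(size=(3, 4))
W = rng.normal(size=(4, 2))
Z = gcn_layer(A, H, W)  # 3 node embeddings in a 2-d shared space
```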
Co-training Embeddings of Knowledge Graphs and Entity Descriptions for Cross-lingual Entity Alignment
This paper introduces an embedding-based approach which leverages a weakly aligned multilingual KG for semi-supervised cross-lingual learning using entity descriptions. Performance on the entity alignment task improves at each iteration of co-training, eventually reaching a stage at which it significantly surpasses previous approaches.
Aligning Cross-Lingual Entities with Multi-Aspect Information
This work investigates embedding-based approaches to encode entities from multilingual KGs into the same vector space, where equivalent entities are close to each other, and applies graph convolutional networks to combine multi-aspect information of entities to learn entity embeddings.
Cross-Lingual Entity Alignment via Joint Attribute-Preserving Embedding
The experimental results on real-world datasets show that this approach significantly outperforms the state-of-the-art embedding approaches for cross-lingual entity alignment and could be complemented with methods based on machine translation.
Modeling Multi-mapping Relations for Precise Cross-lingual Entity Alignment
This approach proposes a weighted negative sampling strategy to generate valuable negative samples during training and treats prediction as a bidirectional problem; experimental results show that it significantly outperforms many other embedding-based approaches, achieving state-of-the-art performance.
Jointly Learning Entity and Relation Representations for Entity Alignment
This paper presents a novel joint learning framework for entity alignment: a Graph Convolutional Network (GCN) based framework that learns both entity and relation representations and incorporates relation approximation into entities to iteratively learn better representations for both.
Improving Cross-lingual Entity Alignment via Optimal Transport
A novel entity alignment framework (OTEA) is proposed, which dually optimizes the entity-level loss and group-level loss via optimal transport theory, and imposes a regularizer on the dual translation matrices to mitigate the effect of noise during transformation.
Relation-Aware Entity Alignment for Heterogeneous Knowledge Graphs
A novel Relation-aware Dual-Graph Convolutional Network is proposed to incorporate relation information via attentive interactions between the knowledge graph and its dual relation counterpart, and further capture neighboring structures to learn better entity representations.
Multilingual Knowledge Graph Embeddings for Cross-lingual Knowledge Alignment
MTransE, a translation-based model for multilingual knowledge graph embeddings, is proposed as a simple and automated solution for cross-lingual knowledge alignment, and the paper explores how MTransE preserves the key properties of its monolingual counterpart.
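The two ingredients of a translation-based multilingual embedding model of this kind can be sketched as scoring functions: a TransE-style triple score per language, and a linear mapping between embedding spaces for alignment. The toy 2-d vectors and identity mapping below are stand-ins, not learned parameters:

```python
import numpy as np

def transe_score(h, r, t):
    """TransE plausibility: lower ||h + r - t|| means a more plausible triple."""
    return np.linalg.norm(h + r - t)

def alignment_score(e_src, e_tgt, M):
    """Linear-transformation alignment: lower ||M e_src - e_tgt|| means
    the two entities are more likely counterparts across KGs."""
    return np.linalg.norm(M @ e_src - e_tgt)

# Toy 2-d embeddings (illustrative values, not trained).
h, r, t = np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])
M = np.eye(2)  # identity as a stand-in for the learned cross-lingual mapping

s_triple = transe_score(h, r, t)    # 0.0: h + r == t, a perfect triple
s_align = alignment_score(t, t, M)  # 0.0: perfectly aligned under identity
```

Training minimizes both scores jointly over monolingual triples and seed alignment pairs, so the two KGs end up in spaces connected by the learned mapping.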
Joint Multilingual Supervision for Cross-lingual Entity Linking
This work develops the first XEL approach that combines supervision from multiple languages jointly, and trains a single entity linking model for multiple languages, improving upon individually trained models for each language.