Finding Interpretable Concept Spaces in Node Embeddings using Knowledge Bases

Maximilian Idahl, Megha Khosla, Avishek Anand
In this paper we propose and study the novel problem of explaining node embeddings by finding embedded human-interpretable subspaces in already-trained unsupervised node representations. We use an external knowledge base, organized as a taxonomy of human-understandable concepts over entities, as a guide to identify subspaces in node embeddings learned from an entity graph derived from Wikipedia. We propose a method that, given a concept, finds a linear transformation to a subspace…
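The general idea of fitting a linear subspace to the embeddings of entities that share a concept can be sketched with PCA via an SVD; this is a hypothetical illustration of the technique, not the paper's exact procedure, and `concept_subspace` is an assumed name:

```python
import numpy as np

def concept_subspace(entity_vecs, k):
    """Fit a k-dimensional linear subspace to the embeddings of entities
    sharing a concept (e.g. all entities typed 'City' in a taxonomy).

    Returns an orthonormal basis B (d x k); span(B) is a candidate
    interpretable subspace for the concept.
    """
    X = np.asarray(entity_vecs, dtype=float)
    X = X - X.mean(axis=0)  # center the concept's member embeddings
    # Top-k right singular vectors span the best-fit subspace (PCA).
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:k].T

# Toy example: 5 "concept member" embeddings in a 4-d embedding space.
rng = np.random.default_rng(0)
members = rng.normal(size=(5, 4))
B = concept_subspace(members, k=2)
print(B.shape)  # (4, 2)
```

Projecting any entity embedding onto `span(B)` then gives its coordinates within the concept's subspace.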

ExCut: Explainable Embedding-Based Clustering over Knowledge Graphs

ExCut is presented, a novel approach that combines KG embeddings with rule mining methods to compute informative clusters of entities along with comprehensible explanations, in an iterative manner that interleaves the learning of embeddings and rules.

Learnt Sparsification for Interpretable Graph Neural Networks

This paper proposes a novel method called KEdge for explicit graph sparsification using the Hard Kumaraswamy distribution. KEdge can be used in conjunction with any GNN model and effectively counters the over-smoothing phenomenon in deep GNNs by maintaining good task performance with increasing GNN layers.

A Neural-symbolic Approach for Ontology-mediated Query Answering

This work introduces a neural-symbolic method for ontology-mediated CQ answering over incomplete KGs that operates in the embedding space, proposes various data augmentation strategies to generate training queries using query-rewriting-based methods, and exploits a novel loss function for training the model.

Zorro: Valid, Sparse, and Stable Explanations in Graph Neural Networks

Zorro, a novel approach based on principles from rate-distortion theory, is proposed; it uses a simple combinatorial procedure to optimize for RDT-Fidelity, which is introduced as a measure of an explanation's effectiveness.

Learning Ideological Embeddings from Information Cascades

A stochastic model is proposed that learns the ideological leaning of each user in a multidimensional ideological space by analyzing the way politically salient content propagates.

KnAC: an approach for enhancing cluster analysis with background knowledge and explanations

Knowledge Augmented Clustering (KnAC) is presented; it can augment an arbitrary clustering algorithm, making the approach a robust, model-agnostic improvement of any state-of-the-art clustering method.

The Semantic Web – ISWC 2020: 19th International Semantic Web Conference, Athens, Greece, November 2–6, 2020, Proceedings, Part I

This work extends previous work on privacy-preserving ontology publishing, in which only a very restricted form of ABoxes, called instance stores, had been considered, to a setting where the data are given as Description Logic ABoxes with possibly anonymised individuals and the privacy policies are expressed using sets of concepts of the DL EL; attention is restricted to compliance.

Privacy and Transparency in Graph Machine Learning: A Unified Perspective

This position paper provides a unified perspective on the interplay of privacy and transparency in GraphML and describes the challenges and possible research directions for a formal investigation of privacy-transparency tradeoffs in GraphML.

Inductive Representation Learning on Large Graphs

GraphSAGE is presented, a general, inductive framework that leverages node feature information (e.g., text attributes) to efficiently generate node embeddings for previously unseen data and outperforms strong baselines on three inductive node-classification benchmarks.
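GraphSAGE's sample-and-aggregate step can be sketched as follows; this is a minimal illustration of the idea with a mean aggregator, omitting the learned weight matrices and nonlinearity of the full model, and the function names are assumptions:

```python
import random

def mean_vec(vecs):
    """Element-wise mean of a list of equal-length feature vectors."""
    d = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(d)]

def sage_mean_layer(h, adj, num_samples, rng):
    """One GraphSAGE-style propagation step with a mean aggregator.

    h: dict node -> feature vector; adj: dict node -> neighbor list.
    For each node, sample a fixed number of neighbors (with replacement),
    average their features, and concatenate with the node's own features.
    Because only node features and local neighborhoods are used, the same
    step applies to previously unseen (inductive) nodes.
    """
    out = {}
    for v, nbrs in adj.items():
        sampled = [rng.choice(nbrs) for _ in range(num_samples)]
        agg = mean_vec([h[u] for u in sampled])
        out[v] = h[v] + agg  # list concatenation: [self || aggregated]
    return out

# Toy graph with 2-d node features.
adj = {0: [1, 2], 1: [0], 2: [0, 1]}
h = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
h1 = sage_mean_layer(h, adj, num_samples=2, rng=random.Random(0))
```

Stacking several such layers lets each node's representation draw on progressively larger neighborhoods.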

Regularizing Knowledge Graph Embeddings via Equivalence and Inversion Axioms

A principled and scalable method for leveraging equivalence and inversion axioms during the learning process, by imposing a set of model-dependent soft constraints on the predicate embeddings, which consistently improves the predictive accuracy of several neural knowledge graph embedding models without compromising their scalability properties.

Entity Embeddings with Conceptual Subspaces as a Basis for Plausible Reasoning

This work proposes a method which learns a vector-space embedding of entities from Wikipedia and constrains this embedding such that entities of the same semantic type are located in some lower-dimensional subspace, and experimentally demonstrates the usefulness of these subspaces as conceptual space representations.

Knowledge Graph Embeddings with node2vec for Item Recommendation

This paper applies node2vec to a knowledge graph built from the MovieLens 1M dataset and DBpedia and uses the node relatedness to generate item recommendations, showing that node2vec consistently outperforms a set of collaborative filtering baselines on an array of relevant metrics.

MEmbER: Max-Margin Based Embeddings for Entity Retrieval

This work proposes a new class of methods for learning vector space embeddings of entities that are interpretable, in the sense that query terms have a direct geometric representation in the vector space.

Question Answering with Subgraph Embeddings

A system is presented that learns to answer questions on a broad range of topics from a knowledge base with few hand-crafted features, scoring natural language questions against candidate answers via low-dimensional embeddings of words and knowledge base constituents.

Node Representation Learning for Directed Graphs

A novel approach for learning node representations in directed graphs maintains separate views, or embedding spaces, for the two distinct node roles induced by the directionality of the edges, and is considerably more robust than other directed-graph approaches.

node2vec: Scalable Feature Learning for Networks

In node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks, a flexible notion of a node's network neighborhood is defined and a biased random walk procedure is designed, which efficiently explores diverse neighborhoods.
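The biased random walk at the heart of node2vec can be sketched directly from its transition rule: a candidate next node is weighted by 1/p if it is the previous node, 1 if it is a neighbor of the previous node, and 1/q otherwise. A minimal sketch, assuming an adjacency-dict graph representation (`node2vec_walk` is a hypothetical name):

```python
import random

def node2vec_walk(adj, start, length, p=1.0, q=1.0, seed=0):
    """One biased random walk as in node2vec.

    adj: dict mapping each node to a list of neighbors (undirected graph).
    p: return parameter (large p discourages revisiting the previous node).
    q: in-out parameter (q > 1 biases toward BFS-like, q < 1 toward
       DFS-like exploration of the neighborhood).
    """
    rng = random.Random(seed)
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        nbrs = adj[cur]
        if len(walk) == 1:
            walk.append(rng.choice(nbrs))  # first step is unbiased
            continue
        prev = walk[-2]
        # Unnormalized transition weights alpha(prev, x) for each candidate.
        weights = []
        for x in nbrs:
            if x == prev:            # distance 0 from prev: return
                weights.append(1.0 / p)
            elif x in adj[prev]:     # distance 1 from prev: stay close
                weights.append(1.0)
            else:                    # distance 2 from prev: move outward
                weights.append(1.0 / q)
        walk.append(rng.choices(nbrs, weights=weights, k=1)[0])
    return walk

# Toy graph: a triangle with a tail.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
print(node2vec_walk(adj, start=0, length=5, p=4.0, q=0.5, seed=1))
```

The resulting walks are then fed to a skip-gram-style objective to learn the node embeddings.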

An Empirical Comparison of Knowledge Graph Embeddings for Item Recommendation

It is shown that the item recommendation problem can be seen as a specific case of knowledge graph completion problem, where the “feedback” property, which connects users to items that they like, has to be predicted.

GraRep: Learning Graph Representations with Global Structural Information

A novel model for learning vertex representations of weighted graphs is proposed that integrates global structural information of the graph into the learning process and significantly outperforms other state-of-the-art methods on downstream tasks.