Explainable zero-shot learning via attentive graph convolutional network and knowledge graphs

@article{Geng2021ExplainableZL,
  title={Explainable zero-shot learning via attentive graph convolutional network and knowledge graphs},
  author={Yuxia Geng and Jiaoyan Chen and Zhiquan Ye and Wei Zhang and Huajun Chen},
  journal={Semantic Web},
  year={2021},
  volume={12},
  pages={741-765}
}
Zero-shot learning (ZSL), which aims to deal with new classes that have never appeared in the training data (i.e., unseen classes), has attracted massive research interest recently. Transferring deep features learned from training classes (i.e., seen classes) is often used, but most current methods are black-box models without any explanations, especially textual explanations that are more acceptable not only to machine learning specialists but also to common people without artificial…
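The abstract's central mechanism (predicting classifiers for unseen classes by propagating class knowledge over a knowledge graph with an attention-weighted GCN) can be illustrated with a minimal sketch. The layer below is a generic attentive graph convolution in PyTorch, not the authors' exact architecture; the class name, the attention scoring function, and the tensor shapes are assumptions for illustration. Attention weights over KG neighbours are the kind of signal that can later be read off as an explanation.

```python
# Minimal, illustrative sketch (not the paper's exact model): a graph convolution
# layer whose neighbour aggregation is re-weighted by learned attention scores,
# applied to the class nodes of a knowledge graph.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.attn = nn.Linear(2 * out_dim, 1)   # scores a (node, neighbour) pair

    def forward(self, x, adj):
        # x:   [num_classes, in_dim]  class-node features (e.g. word embeddings)
        # adj: [num_classes, num_classes] 0/1 KG adjacency, assumed to include self-loops
        h = self.linear(x)
        n = h.size(0)
        pairs = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                           h.unsqueeze(0).expand(n, n, -1)], dim=-1)
        logits = self.attn(pairs).squeeze(-1)
        logits = logits.masked_fill(adj == 0, float('-inf'))
        alpha = torch.softmax(logits, dim=-1)    # per-node attention over its KG neighbours
        return F.relu(alpha @ h)                 # attentive neighbourhood aggregation
```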
K-ZSL: Resources for Knowledge-driven Zero-shot Learning
TLDR
This paper proposes five resources for KG-based research in zero-shot image classification and zero-shot KG completion, contributing for each a benchmark and its KG, with semantics ranging from text to attributes and from relational knowledge to logical expressions.
Disentangled Ontology Embedding for Zero-shot Learning
TLDR
This paper proposes to learn disentangled ontology embeddings guided by ontology properties to capture and utilize more fine-grained class relationships in different aspects of Zero-shot Learning.
Match Them Up: Visually Explainable Few-shot Image Classification
TLDR
A new way to perform FSL for image classification is revealed, using visual representations from the backbone model and weights generated by a newly emerged explainable classifier to achieve both good accuracy and satisfactory explainability on three mainstream datasets.
Knowledge-aware Zero-Shot Learning: Survey and Perspective
TLDR
A literature review of ZSL is presented from the perspective of external knowledge, where the kinds of external knowledge are categorized, the corresponding methods are reviewed, and the different kinds of external knowledge are compared.
MTUNet: Few-shot Image Classification with Visual Explanations
TLDR
A new way to perform explainable FSL for image classification is revealed, using discriminative patterns and pairwise matching, and results show that the proposed method can achieve satisfactory explainability on two mainstream datasets.
Ontology-guided Semantic Composition for Zero-Shot Learning
TLDR
This study proposes to model the compositional and expressive semantics of class labels by an OWL (Web Ontology Language) ontology, and further develops a new ZSL framework with ontology embedding.
A Novel GCN Architecture for Text Generation from Knowledge Graphs: Full Node Embedded Strategy and Context Gate with Copy and Penalty Mechanism
TLDR
A novel neural network architecture called GCN-FCCP is proposed, based on a Graph Convolutional Network enabled by a full-node embedding strategy and context gates with a copy and penalty mechanism; it can effectively generate high-quality text from graph-structured input and obtains high scores on four automatic metrics.
Benchmarking Knowledge-driven Zero-shot Learning
Low-resource Learning with Knowledge Graphs: A Comprehensive Survey

References (showing 1-10 of 95)
Generative Adversarial Zero-shot Learning via Knowledge Graphs
TLDR
This paper introduces a new generative ZSL method named KG-GAN by incorporating rich semantics in a knowledge graph (KG) into GANs and leveraging well-learned semantic embeddings for each node (representing a visual category) to synthesize compelling visual features for unseen classes.
Human-centric Transfer Learning Explanation via Knowledge Graph [Extended Abstract]
TLDR
The first one explains the transferability of features learned by a Convolutional Neural Network from one domain to another through pre-training and fine-tuning, while the second justifies the predictions made for a target domain by models from multiple source domains in zero-shot learning (ZSL).
Generative Adversarial Zero-Shot Relational Learning for Knowledge Graphs
TLDR
A novel formulation of zero-shot learning is considered, which is model-agnostic and could potentially be applied to any version of KG embeddings, and it consistently yields performance improvements on the NELL and Wiki datasets.
Multi-label Zero-Shot Learning with Structured Knowledge Graphs
TLDR
A novel deep learning architecture for multi-label zero-shot learning (ML-ZSL) is proposed, which is able to predict multiple unseen class labels for each input instance, together with a framework that incorporates knowledge graphs for describing the relationships between multiple labels.
Choose Your Neuron: Incorporating Domain Knowledge through Neuron-Importance
TLDR
This work learns to map domain knowledge about novel “unseen” classes onto a dictionary of learned concepts and optimizes for network parameters that can effectively combine these concepts, essentially learning classifiers by discovering and composing learned semantic concepts in deep networks.
Rethinking Knowledge Graph Propagation for Zero-Shot Learning
TLDR
This work proposes a Dense Graph Propagation module with carefully designed direct links among distant nodes to exploit the hierarchical structure of the knowledge graph through additional connections, and it outperforms state-of-the-art zero-shot learning approaches.
Semantic Autoencoder for Zero-Shot Learning
TLDR
This work presents a novel solution to ZSL based on learning a Semantic AutoEncoder (SAE), which significantly outperforms existing ZSL models with the additional benefit of lower computational cost, and it beats the state of the art when the SAE is applied to the supervised clustering problem. (A minimal sketch of the SAE's closed-form solution follows.)
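For context, the SAE objective has a well-known closed-form solution via a Sylvester equation, and the sketch below follows that standard formulation. The function name, the lambda value, and the toy dimensions are illustrative assumptions, not the paper's reported settings.

```python
# Sketch of the Semantic AutoEncoder (SAE) closed-form solution, under the
# standard formulation  min_W ||X - W^T S||^2 + lam * ||W X - S||^2,
# which reduces to the Sylvester equation  S S^T W + lam W X X^T = (1 + lam) S X^T.
import numpy as np
from scipy.linalg import solve_sylvester

def train_sae(X, S, lam=0.2):
    """X: (d, N) visual features; S: (k, N) class semantic vectors; returns W: (k, d)."""
    A = S @ S.T                  # (k, k)
    B = lam * (X @ X.T)          # (d, d)
    C = (1.0 + lam) * (S @ X.T)  # (k, d)
    return solve_sylvester(A, B, C)

# Toy usage: project an image feature into semantic space, then match it to
# unseen-class semantic prototypes by nearest neighbour (dimensions are made up).
rng = np.random.default_rng(0)
X, S = rng.normal(size=(512, 100)), rng.normal(size=(85, 100))
W = train_sae(X, S)
s_pred = W @ rng.normal(size=(512, 1))
```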
Zero-Shot Recognition via Semantic Embeddings and Knowledge Graphs
TLDR
This paper builds upon the recently introduced Graph Convolutional Network (GCN) and proposes an approach that uses both semantic embeddings and the categorical relationships to predict the classifiers, and shows that it is robust to noise in the KG.
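The classifier-prediction idea summarized above can be sketched generically: class word embeddings are propagated over a normalized KG adjacency, and the GCN output is regressed onto the seen classes' CNN classifier weights. Layer sizes, the two-layer depth, and the variable names below are assumptions, not the authors' released configuration.

```python
# Generic sketch of GCN-based classifier prediction for ZSL.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassifierGCN(nn.Module):
    def __init__(self, emb_dim, hid_dim, feat_dim):
        super().__init__()
        self.w1 = nn.Linear(emb_dim, hid_dim)
        self.w2 = nn.Linear(hid_dim, feat_dim)   # feat_dim = CNN feature size

    def forward(self, word_emb, adj_norm):
        # word_emb: [num_classes, emb_dim], adj_norm: normalized KG adjacency
        h = F.leaky_relu(adj_norm @ self.w1(word_emb))
        return adj_norm @ self.w2(h)             # one predicted classifier row per class

# Training uses seen classes only, e.g.:
#   loss = F.mse_loss(model(word_emb, adj_norm)[seen_idx], cnn_classifier_weights)
# At test time the rows predicted for unseen classes serve as their classifiers.
```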
Learning a Deep Embedding Model for Zero-Shot Learning
  Li Zhang, T. Xiang, S. Gong. In 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
TLDR
This paper proposes to use the visual space as the embedding space instead of embedding into a semantic space or an intermediate space, and argues that in this space, the subsequent nearest neighbour search would suffer much less from the hubness problem and thus become more effective.
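The design choice summarized above (embedding into the visual feature space rather than the semantic space) can be illustrated with a small sketch; the two-layer mapping, hidden size, and Euclidean nearest-neighbour rule are assumptions for illustration rather than the paper's exact setup.

```python
# Sketch: embed class semantic vectors INTO the visual feature space, then
# classify an image feature by nearest neighbour among the embedded classes.
import torch
import torch.nn as nn

class SemanticToVisual(nn.Module):
    def __init__(self, sem_dim, feat_dim, hid_dim=1024):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(sem_dim, hid_dim), nn.ReLU(),
                                 nn.Linear(hid_dim, feat_dim))

    def forward(self, s):                  # s: [num_classes, sem_dim]
        return self.net(s)                 # class prototypes in visual space

def predict(img_feat, class_protos):       # img_feat: [feat_dim], protos: [C, feat_dim]
    dists = torch.cdist(img_feat.unsqueeze(0), class_protos)
    return dists.argmin(dim=1)             # index of the nearest class prototype
```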
Long-tail Relation Extraction via Knowledge Graph Embeddings and Graph Convolution Networks
TLDR
This work proposes to leverage implicit relational knowledge among class labels from knowledge graph embeddings, to learn explicit relational knowledge using graph convolution networks, and to integrate that relational knowledge into the relation extraction model via a coarse-to-fine knowledge-aware attention mechanism.