Combining Sub-Symbolic and Symbolic Methods for Explainability

@article{Himmelhuber2021CombiningSA,
  title={Combining Sub-Symbolic and Symbolic Methods for Explainability},
  author={Anna Himmelhuber and Stephan Grimm and Sonja Zillner and Mitchell Joblin and Martin Ringsquandl and Thomas A. Runkler},
  journal={ArXiv},
  year={2021},
  volume={abs/2112.01844}
}
Similar to other connectionist models, Graph Neural Networks (GNNs) lack transparency in their decision-making. A number of sub-symbolic approaches have been developed to provide insights into the GNN decision-making process. These are first important steps on the way to explainability, but the generated explanations are often hard to understand for users who are not AI experts. To overcome this problem, we introduce a conceptual approach combining sub-symbolic and symbolic methods for human…
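For readers unfamiliar with the sub-symbolic explainers the abstract refers to: they typically produce numeric importance masks over edges and node features. The following is a minimal sketch, not taken from the paper, of how one such method cited below (GNNExplainer) can be invoked through PyTorch Geometric's torch_geometric.explain API; the two-layer GCN and the toy graph are illustrative assumptions.

# A hedged sketch: invoking GNNExplainer on a toy node-classification GNN.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from torch_geometric.explain import Explainer, GNNExplainer

class GCN(torch.nn.Module):  # illustrative two-layer GCN, not the paper's model
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, num_classes)

    def forward(self, x, edge_index):
        return self.conv2(F.relu(self.conv1(x, edge_index)), edge_index)

# Toy graph: 4 nodes with 3-dimensional features, connected in a cycle.
x = torch.randn(4, 3)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
model = GCN(in_dim=3, hidden_dim=8, num_classes=2)

explainer = Explainer(
    model=model,
    algorithm=GNNExplainer(epochs=100),  # learns soft masks by optimization
    explanation_type='model',
    node_mask_type='attributes',
    edge_mask_type='object',
    model_config=dict(mode='multiclass_classification',
                      task_level='node', return_type='raw'),
)

# Explain the model's prediction for node 0.
explanation = explainer(x, edge_index, index=0)
print(explanation.edge_mask)  # importance weight per edge
print(explanation.node_mask)  # importance weight per node feature

The output illustrates exactly the gap the abstract describes: raw importance weights that are informative to AI experts but carry no domain semantics for other users, which is what the proposed symbolic layer is meant to address.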

References

Showing 1-10 of 24 references
Symbolic Vs Sub-symbolic AI Methods: Friends or Enemies?
This work provides a comprehensive overview of symbolic, sub-symbolic, and in-between approaches in the domain of knowledge graphs, covering schema representation, schema matching, knowledge graph completion, link prediction, entity resolution, entity classification, and triple classification.
Foundations of Explainable Knowledge-Enabled Systems
This work presents a historical overview of explainable artificial intelligence systems, with a focus on knowledge-enabled systems, spanning the expert systems, cognitive assistants, semantic applications, and machine learning domains.
Semantic Web Technologies for Explainable Machine Learning Models: A Literature Review
This work presents current approaches to combining Machine Learning with Semantic Web Technologies in the context of model explainability, based on a systematic literature review, and suggests directions for further research on combining Semantic Web Technologies with Machine Learning.
Knowledge-based Transfer Learning Explanation
Three kinds of knowledge-based explanatory evidence with different granularities, namely general factors, particular narrators, and core contexts, are first proposed and then inferred with both local ontologies and external knowledge bases for human-centric explanation of transfer learning.
GNNExplainer: Generating Explanations for Graph Neural Networks
GNNExplainer is proposed, the first general, model-agnostic approach for providing interpretable explanations for predictions of any GNN-based model on any graph-based machine learning task.
Explainability Methods for Graph Convolutional Neural Networks
This paper develops the graph analogues of three prominent explainability methods for convolutional neural networks: contrastive gradient-based (CG) saliency maps, Class Activation Mapping (CAM), and Excitation Back-Propagation (EB) and their variants, gradient-weighted CAM (Grad-CAM) and contrastive EB (c-EB).
Graph Attention Networks
We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations.
Explaining Trained Neural Networks with Semantic Web Technologies: First Steps
This paper provides a conceptual approach that leverages publicly available structured data in order to explain the input-output behavior of trained artificial neural networks and applies existing Semantic Web technologies to provide an experimental proof of concept.