Combining Sub-Symbolic and Symbolic Methods for Explainability
@article{Himmelhuber2021CombiningSA,
  title   = {Combining Sub-Symbolic and Symbolic Methods for Explainability},
  author  = {Anna Himmelhuber and Stephan Grimm and Sonja Zillner and Mitchell Joblin and Martin Ringsquandl and Thomas A. Runkler},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2112.01844}
}
Like other connectionist models, Graph Neural Networks (GNNs) lack transparency in their decision-making. A number of sub-symbolic approaches have been developed to provide insights into the GNN decision-making process. These are important first steps towards explainability, but the generated explanations are often hard to understand for users who are not AI experts. To overcome this problem, we introduce a conceptual approach combining sub-symbolic and symbolic methods for human…
References
Semantic Web Technologies for Explainable Machine Learning Models: A Literature Review
- Computer Science, PROFILES/SEMEX@ISWC, 2019
Based on a systematic literature review, this work presents current approaches to combining Machine Learning with Semantic Web Technologies in the context of model explainability and suggests directions for further research on combining the two.
Knowledge-based Transfer Learning Explanation
- Computer Science, KR, 2018
Three kinds of knowledge-based explanatory evidence with different granularities, including general factors, particular narrators, and core contexts, are first proposed and then inferred with both local ontologies and external knowledge bases for human-centric explanation of transfer learning.
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
- Computer Science, Inf. Fusion, 2020
Explainability Methods for Graph Convolutional Neural Networks
- Computer Science, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019
This paper develops the graph analogues of three prominent explainability methods for convolutional neural networks: contrastive gradient-based (CG) saliency maps, Class Activation Mapping (CAM), and Excitation Back-Propagation (EB), as well as their variants, gradient-weighted CAM (Grad-CAM) and contrastive EB (c-EB).
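To make the mechanism behind this entry concrete, here is a minimal, illustrative sketch of Grad-CAM-style node relevance for a GNN: channel weights are obtained by averaging the gradients of the class score over all nodes, and each node's relevance is the ReLU of the weighted sum of its feature maps. The function name, tensor shapes, and normalization are assumptions for illustration, not the authors' implementation.

```python
import torch

def grad_cam_nodes(node_feats, class_score):
    """Grad-CAM-style node importance for a GNN (illustrative sketch).

    node_feats:  (N, F) hidden node features from an intermediate GNN layer,
                 part of the autograd graph that produced class_score.
    class_score: scalar logit of the class being explained.
    Returns a length-N vector of non-negative node relevance scores.
    """
    # Gradient of the class score w.r.t. every node's feature map
    grads = torch.autograd.grad(class_score, node_feats, retain_graph=True)[0]  # (N, F)
    # Channel weights alpha_f: gradients averaged over all nodes
    alpha = grads.mean(dim=0)                      # (F,)
    # Node relevance: ReLU of the alpha-weighted sum of its feature maps
    cam = torch.relu(node_feats @ alpha)           # (N,)
    return cam / (cam.max() + 1e-8)                # normalise to [0, 1]
```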
Graph Attention Networks
- Computer Science, ICLR, 2018
We present graph attention networks (GATs), novel neural network architectures that operate on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior…
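A minimal single-head sketch of the masked self-attention described above, assuming a dense binary adjacency matrix with self-loops; the class and variable names are illustrative, and this is not the authors' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    """Single-head graph attention layer (sketch of the GAT mechanism)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)    # shared linear transform
        self.a = nn.Parameter(torch.randn(2 * out_dim) * 0.1)  # attention vector a

    def forward(self, h, adj):
        # h: (N, in_dim) node features; adj: (N, N) binary adjacency with self-loops
        z = self.W(h)                                       # (N, out_dim)
        N = z.size(0)
        # e_ij = LeakyReLU(a^T [z_i || z_j]) for every pair of nodes
        pairs = torch.cat([z.unsqueeze(1).expand(N, N, -1),
                           z.unsqueeze(0).expand(N, N, -1)], dim=-1)
        e = F.leaky_relu(pairs @ self.a, negative_slope=0.2)   # (N, N)
        # Masked attention: only neighbours (adj == 1) compete in the softmax
        e = e.masked_fill(adj == 0, float('-inf'))
        alpha = torch.softmax(e, dim=1)                     # attention coefficients
        return F.elu(alpha @ z)                             # aggregated node embeddings
```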
Explaining Trained Neural Networks with Semantic Web Technologies: First Steps
- Computer Science, NeSy, 2017
This paper provides a conceptual approach that leverages publicly available structured data in order to explain the input-output behavior of trained artificial neural networks and applies existing Semantic Web technologies to provide an experimental proof of concept.
Neurosymbolic AI: The 3rd Wave
- Computer Science, ArXiv, 2020
The insights provided by 20 years of neural-symbolic computing are shown to shed new light on the increasingly prominent role of trust, safety, interpretability, and accountability in AI.
Choose Your Neuron: Incorporating Domain Knowledge through Neuron-Importance
- Computer Science, ECCV, 2018
This work learns to map domain knowledge about novel "unseen" classes onto a dictionary of learned concepts and optimizes for network parameters that can effectively combine these concepts, essentially learning classifiers by discovering and composing learned semantic concepts in deep networks.
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
- Computer Science, HLT-NAACL Demos, 2016
LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction.
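A minimal sketch of the LIME idea for tabular data, assuming a black-box function that returns the positive-class probability for each row: perturbed samples are weighted by proximity to the explained instance, and a weighted linear surrogate is fit locally. The helper name, perturbation scheme, and kernel choice are assumptions for illustration, not the released LIME library API.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lime_explain(black_box, x, num_samples=1000, kernel_width=0.75):
    """Minimal LIME-style local explanation for a tabular classifier.

    black_box(X) -> positive-class probability for each row of X.
    Returns one weight per feature; larger magnitude means more influence
    on the prediction in the neighbourhood of x.
    """
    d = x.shape[0]
    # 1. Perturb the instance by randomly switching features on/off.
    mask = np.random.binomial(1, 0.5, size=(num_samples, d))
    X_pert = mask * x                      # "off" features are zeroed out
    # 2. Query the black box on the perturbed neighbourhood.
    y = black_box(X_pert)
    # 3. Weight samples by proximity to the original instance.
    dist = np.linalg.norm(X_pert - x, axis=1)
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)
    # 4. Fit an interpretable (linear) surrogate on the binary representation.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(mask, y, sample_weight=weights)
    return surrogate.coef_                 # local feature attributions around x
```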