Directions for Explainable Knowledge-Enabled Systems

@inproceedings{Chari2020DirectionsFE,
  title={Directions for Explainable Knowledge-Enabled Systems},
  author={Shruthi Chari and Daniel Gruen and Oshani Wasana Seneviratne and Deborah L. McGuinness},
  booktitle={Knowledge Graphs for eXplainable Artificial Intelligence},
  year={2020}
}
Interest in the field of Explainable Artificial Intelligence has been growing for decades and has accelerated recently. As Artificial Intelligence models have become more complex, and often more opaque, with the incorporation of complex machine learning techniques, explainability has become more critical. Recently, researchers have been investigating and tackling explainability with a user-centric focus, looking for explanations to consider trustworthiness, comprehensibility, explicit… 

Citations

Explainable Machine Learning with Prior Knowledge: An Overview
TLDR
This survey presents an overview of integrating prior knowledge into machine learning systems in order to improve explainability, and categorizes current research into three main categories: integrating knowledge into the machine learning pipeline, integrating it into the explainability method, or deriving knowledge from explanations.
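As a rough illustration of the first of those categories (integrating knowledge into the machine learning pipeline), the sketch below adds a hypothetical expert-supplied monotonicity rule as a penalty term in a training loss; the rule, the knowledge_penalty helper, and the weight lam are illustrative assumptions, not the survey's formulation.

def data_loss(predictions, targets):
    """Ordinary squared-error loss on the labelled data."""
    return sum((p - t) ** 2 for p, t in zip(predictions, targets)) / len(targets)

def knowledge_penalty(predictions, feature_values):
    """Penalize violations of a hypothetical domain rule:
    predictions should not decrease as this feature increases."""
    ordered = sorted(zip(feature_values, predictions))  # sort by feature value
    return sum(max(0.0, p_prev - p_next)                # count only decreases
               for (_, p_prev), (_, p_next) in zip(ordered, ordered[1:]))

def total_loss(predictions, targets, feature_values, lam=0.1):
    """Knowledge-regularized objective: fit the data while respecting the rule."""
    return data_loss(predictions, targets) + lam * knowledge_penalty(predictions, feature_values)
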
Explanation Ontology: A Model of Explanations for User-Centered AI
TLDR
An explanation ontology is designed to model both the role of explanations, accounting for the system and user attributes in the process, and the range of different literature-derived explanation types, to help system designers make informed choices on which explanations AI systems can and should provide.
Towards Multi-Grained Explainability for Graph Neural Networks
TLDR
This work exploits the pre-training and fine-tuning idea to develop an explainer that generates multi-grained explanations, and shows the explainer's superiority over the leading baselines in terms of AUC on explaining graph classification.
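The AUC-based evaluation mentioned above is commonly computed by scoring per-edge importance values against ground-truth explanation edges on synthetic benchmarks; the minimal sketch below shows that generic recipe with made-up scores and is not the paper's explainer.

from sklearn.metrics import roc_auc_score

# Hypothetical explainer output: one importance score per edge.
edge_importance = {("a", "b"): 0.92, ("b", "c"): 0.15, ("c", "d"): 0.81, ("d", "e"): 0.05}
# Hypothetical ground truth: edges that actually drive the prediction.
ground_truth_edges = {("a", "b"), ("c", "d")}

edges = list(edge_importance)
y_true = [1 if e in ground_truth_edges else 0 for e in edges]
y_score = [edge_importance[e] for e in edges]
print("explanation AUC:", roc_auc_score(y_true, y_score))  # 1.0 for this toy case
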
Integrating knowledge graphs for explainable artificial intelligence in biomedicine
TLDR
This work proposes an approach that builds a KG for personalized medicine to serve as a rich input for the AI system and incorporates the system's outcomes to support explanations by connecting input and output (post hoc).
Counterfactual Explanations as Interventions in Latent Space
TLDR
This paper presents Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology that generates counterfactual explanations which capture, by design, the underlying causal relations in the data, while providing feasible recommendations for reaching the proposed profile.
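A minimal sketch of the general "counterfactual as latent intervention" pattern follows; it is not the CEILS method itself (CEILS additionally constructs the latent space from a causal graph so that interventions respect the data's causal relations), and the toy encoder, decoder, and classifier are stand-in assumptions.

def encode(x):          # map an input profile to a latent representation
    return [x[0] + x[1], x[0] - x[1]]

def decode(z):          # map a latent point back to input space
    return [(z[0] + z[1]) / 2, (z[0] - z[1]) / 2]

def classify(x):        # toy classifier: approve (1) when the score is high enough
    return 1 if 0.7 * x[0] + 0.3 * x[1] > 1.0 else 0

def latent_counterfactual(x, step=0.05, max_iter=200):
    """Nudge the latent representation until the decoded point flips the decision."""
    z = encode(x)
    for _ in range(max_iter):
        x_cf = decode(z)
        if classify(x_cf) != classify(x):
            return x_cf          # feasible counterfactual found
        z = [z[0] + step, z[1]]  # intervene on one latent factor only
    return None

print(latent_counterfactual([0.5, 0.5]))  # a slightly changed profile that flips the decision
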
A Survey on Interpretable Reinforcement Learning
TLDR
This survey provides an overview of various approaches to achieve higher interpretability in reinforcement learning and argues that interpretable RL may embrace different facets: interpretable inputs, interpretable (transition/reward) models, and interpretable decision-making.
A Survey on Visual Transfer Learning using Knowledge Graphs
TLDR
A broad overview of knowledge graph embedding methods is provided, and several joint training objectives suitable for combining them with high-dimensional visual embeddings are described, to help researchers find meaningful evaluation benchmarks.
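One plausible shape of such a joint training objective, assuming a TransE-style knowledge-graph term plus a simple alignment term tying a class's KG entity embedding to a pre-computed visual embedding, is sketched below; the specific losses, names, and the weight alpha are assumptions rather than the survey's formulation.

import numpy as np

def transe_loss(h, r, t):
    """Translational KG objective: head + relation should land near tail."""
    return np.linalg.norm(h + r - t)

def alignment_loss(kg_entity, visual_embedding):
    """Cross-modal term tying a KG entity to the visual embedding of its class."""
    return np.linalg.norm(kg_entity - visual_embedding)

def joint_loss(h, r, t, visual_embedding, alpha=0.5):
    return transe_loss(h, r, t) + alpha * alignment_loss(h, visual_embedding)

rng = np.random.default_rng(0)
h, r, t, v = (rng.normal(size=16) for _ in range(4))
print(joint_loss(h, r, t, v))
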
Matching Multiple Ontologies to Build a Knowledge Graph for Personalized Medicine
TLDR
A novel holistic ontology alignment strategy building on AgreementMakerLight is presented: it clusters ontologies based on their semantic overlap, measured with fast matching techniques at a high degree of confidence, and then applies more sophisticated matching techniques within each cluster.
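The two-stage structure of that strategy can be sketched as follows, with made-up label sets and a crude Jaccard overlap standing in for AgreementMakerLight's actual matchers: cheap lexical overlap first groups the ontologies, and the expensive pairwise matching then runs only within each group.

from itertools import combinations

ontologies = {  # hypothetical ontologies, each reduced to a set of class labels
    "drug_onto":    {"drug", "dose", "interaction", "compound"},
    "chem_onto":    {"compound", "molecule", "interaction", "reaction"},
    "disease_onto": {"disease", "symptom", "diagnosis", "phenotype"},
}

def overlap(a, b):  # fast, coarse similarity between two ontologies' labels
    return len(a & b) / len(a | b)

# Stage 1: cluster ontologies whose overlap clears a confidence threshold.
clusters, threshold = [], 0.25
for name, labels in ontologies.items():
    for cluster in clusters:
        if any(overlap(labels, ontologies[m]) >= threshold for m in cluster):
            cluster.add(name)
            break
    else:
        clusters.append({name})

# Stage 2: run the sophisticated pairwise matching only within each cluster.
for cluster in clusters:
    for a, b in combinations(sorted(cluster), 2):
        print(f"detailed matching: {a} <-> {b}")
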
Making Deep Learning-Based Predictions for Credit Scoring Explainable
TLDR
An explainable deep learning model for credit scoring is proposed which can harness the performance benefits offered by deep learning and yet comply with the legislation requirements for the automated decision-making processes.
...

References

SHOWING 1-10 OF 59 REFERENCES
Why these Explanations? Selecting Intelligibility Types for Explanation Goals
TLDR
A recently developed conceptual framework for user-centric reasoned XAI that draws from foundational concepts in philosophy, cognitive psychology, and AI is leveraged to identify pathways for how user reasoning drives XAI needs.
Explaining Explanations: An Approach to Evaluating Interpretability of Machine Learning
TLDR
A definition of explainability is provided and used to classify the existing literature, with the aim of establishing best practices and identifying open challenges in explainable artificial intelligence.
Explaining Explanations in AI
TLDR
This work contrasts the different schools of thought on what makes an explanation in philosophy and sociology, and suggests that machine learning might benefit from viewing the problem more broadly.
Why and why not explanations improve the intelligibility of context-aware intelligent systems
TLDR
It is shown that explanations describing why the system behaved a certain way resulted in better understanding and stronger feelings of trust, suggesting that automatically providing explanations about a system's decision process can help mitigate intelligibility problems in context-aware intelligent systems.
Evaluating Explanations
TLDR
A theory of how context, the explainer's current knowledge, and their needs for specific information affect the evaluation of explanations is presented, and its implementation is described in ACCEPTER, a program that evaluates explanations for anomalies detected during story understanding.
The Use and Effects of Knowledge-Based System Explanations: Theoretical Foundations and a Framework for Empirical Evaluation
TLDR
The role of KBS explanations is discussed to provide an understanding of both the specific factors that influence explanation use and the consequences of such use.
Dedalo: Looking for Clusters Explanations in a Labyrinth of Linked Data
TLDR
Dedalo is a framework that dynamically traverses Linked Data to find commonalities that form explanations for the items of a cluster; different strategies (or heuristics) were developed to guide this traversal, reducing the time needed to find the best explanation.
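A toy sketch of the traversal idea (not Dedalo's actual heuristics or its iterative path deepening) is given below: walk outgoing Linked Data properties from each clustered item and keep the property/value pairs shared by every item as candidate explanations. The triples and entity names are made up.

triples = [  # hypothetical (subject, property, object) statements
    ("beethoven", "bornIn", "bonn"),
    ("beethoven", "genre", "classical"),
    ("mozart", "bornIn", "salzburg"),
    ("mozart", "genre", "classical"),
    ("bonn", "country", "germany"),
]

def neighbours(entity):
    return {(p, o) for s, p, o in triples if s == entity}

def explain_cluster(items):
    """Candidate explanation = property/value pair reachable from every item."""
    common = neighbours(items[0])
    for item in items[1:]:
        common &= neighbours(item)
    return common

print(explain_cluster(["beethoven", "mozart"]))  # {('genre', 'classical')}
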
The Pragmatic Turn in Explainable Artificial Intelligence (XAI)
TLDR
It is concluded that interpretative or approximation models not only provide the best way to achieve the objectual understanding of a machine learning model, but are also a necessary condition to achieve post hoc interpretability.
Explainable Artificial Intelligence (XAI)
  • M. Ridley
  • Computer Science
    Information Technology and Libraries
  • 2022
The field of explainable artificial intelligence (XAI) advances techniques, processes, and strategies that provide explanations for the predictions, recommendations, and decisions of opaque and…
...