Corpus ID: 219176929

Explainable Artificial Intelligence: a Systematic Review

@article{Vilone2020ExplainableAI,
  title={Explainable Artificial Intelligence: a Systematic Review},
  author={Giulia Vilone and Luca Longo},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.00093}
}
Explainable Artificial Intelligence (XAI) has experienced significant growth over the last few years. This is due to the widespread application of machine learning, particularly deep learning, which has led to the development of highly accurate models that lack explainability and interpretability. A plethora of methods to tackle this problem have been proposed, developed and tested. This systematic review contributes to the body of knowledge by clustering these methods with a hierarchical… 
Deep learning in electron microscopy
TLDR
This review paper offers a practical perspective for developers with limited familiarity with deep learning in electron microscopy, discussing the hardware and software needed to get started with deep learning and to interface with electron microscopes.
The Role of Human Knowledge in Explainable AI
TLDR
This article aims to present a literature overview on collecting and employing human knowledge to improve and evaluate the understandability of machine learning models through human-in-the-loop approaches.
A Checklist for Explainable AI in the Insurance Domain
TLDR
This paper investigates the current usage of AI algorithms in the Dutch insurance industry and the adoption of explainable artificial intelligence (XAI) techniques, and designs a checklist to help insurance companies assure quality standards for XAI and lay a solid foundation for cooperation between organisations.
Explanatory Pluralism in Explainable AI
TLDR
This paper reduces the ambiguity in the use of the word ‘explanation’ in the field of XAI, giving practitioners and stakeholders a useful template for avoiding equivocation and for evaluating XAI methods and putative explanations.
A Fuzzy Shell for Developing an Interpretable BCI Based on the Spatiotemporal Dynamics of the Evoked Oscillations
TLDR
A general-purpose fuzzy software shell for developing a custom EEG BCI system that relies on bursts of ongoing EEG frequency-power synchronization/desynchronization at scalp level, supporting quick BCI setup through linguistic features, ad hoc fuzzy membership construction, explainable IF-THEN rules, and the Internet of Things concept.
A Systematic Review of Explainable Artificial Intelligence in Terms of Different Application Domains and Tasks
TLDR
XAI methods are mostly developed for safety-critical domains worldwide; deep learning and ensemble models are exploited more than other types of AI/ML models; visual explanations are more acceptable to end-users; and robust evaluation metrics are being developed to assess the quality of explanations.
State-of-the-Art Explainability Methods with Focus on Visual Analytics Showcased by Glioma Classification
TLDR
A comparison of 11 identified Python libraries that complement the better-known SHAP and LIME libraries for visualizing the explainability and interpretability of AI model outputs is presented.
Evaluating explainable artificial intelligence (XAI): algorithmic explanations for transparency and trustworthiness of ML algorithms and AI systems
TLDR
This paper investigates XAI for algorithmic trustworthiness and transparency through example use cases, using the SHAP (SHapley Additive exPlanations) library to visualize the effect of features individually and cumulatively in the prediction process.
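To make that workflow concrete, the following is a minimal sketch of the standard shap API, not the paper's actual code; the tree-based regressor and dataset are hypothetical stand-ins. summary_plot shows each feature's individual effect across the data, while decision_plot traces how attributions accumulate into single predictions.

  # Minimal SHAP sketch; model and data are illustrative stand-ins,
  # not the experiments from the paper above.
  import shap
  from sklearn.datasets import load_diabetes
  from sklearn.ensemble import RandomForestRegressor

  X, y = load_diabetes(return_X_y=True, as_frame=True)
  model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

  explainer = shap.TreeExplainer(model)    # Shapley values computed exactly for trees
  shap_values = explainer.shap_values(X)   # one attribution per feature per sample

  # Individual effect of each feature across the dataset:
  shap.summary_plot(shap_values, X)

  # Cumulative build-up from the base value to each prediction:
  shap.decision_plot(explainer.expected_value, shap_values[:20], X.iloc[:20])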
Automating the Design and Development of Gradient Descent Trained Expert System Networks
  • J. Straub
  • Computer Science
  • Knowledge-Based Systems, 2022
TLDR
This paper proposes the use of rule-fact networks that are larger and denser than the application needs, which are trained, pruned, manually reviewed and then re-trained for use, demonstrating the efficacy of this technique for many applications.
...

References

Showing 1-10 of 380 references
Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models
TLDR
Two approaches to explaining the predictions of deep learning models are presented: one computes the sensitivity of the prediction with respect to changes in the input, and the other meaningfully decomposes the decision in terms of the input variables.
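As an illustration of the first, sensitivity-based approach, here is a generic sketch (a stand-in, not the paper's method) that computes the gradient of a model's prediction with respect to its input using PyTorch; input dimensions with large gradient magnitudes are the ones the prediction is most sensitive to.

  # Sensitivity sketch: gradient of the prediction w.r.t. the input.
  # The network and input below are hypothetical stand-ins.
  import torch
  import torch.nn as nn

  model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
  x = torch.randn(1, 10, requires_grad=True)   # a single input sample

  model(x).sum().backward()                    # d(prediction) / d(input)
  sensitivity = x.grad.abs().squeeze()         # per-feature sensitivity scores
  print(sensitivity)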
Explainable artificial intelligence: A survey
TLDR
Recent developments in XAI in supervised learning are summarized, a discussion on its connection with artificial general intelligence is started, and proposals for further research directions are given.
An Integrative 3C evaluation framework for Explainable Artificial Intelligence
TLDR
An integrated framework with three evaluation criteria (correlation, completeness, and complexity) to evaluate XAI is proposed, and it is found that rule extraction is the most advanced and promising of current XAI methods.
Towards Explainable Artificial Intelligence
TLDR
This introductory paper presents recent developments and applications in deep learning, and makes a plea for a wider use of explainable learning algorithms in practice.
Designing Explainability of an Artificial Intelligence System
TLDR
A research framework for designing the causal explainability of an AI system is provided; based on attribution results, users perceive the system as human-like, which motivates anthropomorphism.
Explainable AI: A Brief Survey on History, Research Areas, Approaches and Challenges
TLDR
The history of Explainable AI is introduced, from expert systems and traditional machine learning approaches to the latest progress in modern deep learning, and the major research areas and state-of-the-art approaches of recent years are described.
Visual Analytics for Explainable Deep Learning
Recently, deep learning has been advancing the state of the art in artificial intelligence to a new level, and humans rely on artificial intelligence techniques more than ever. However, even with…
Explainable Artificial Intelligence via Bayesian Teaching
TLDR
This work proposes an explanation-by-examples approach that builds on recent research in Bayesian teaching, in which a small subset of the data is selected that leads the learner to conclusions similar to those drawn from the entire dataset.
Towards Dependable and Explainable Machine Learning Using Automated Reasoning
TLDR
A novel automated-reasoning-based approach that extracts valuable insights from classification and prediction models obtained via machine learning, so that users can understand the reasoning behind the models' decisions.
Explainable Artificial Intelligence Applications in NLP, Biomedical, and Malware Classification: A Literature Review
TLDR
Three classification tasks are evaluated that use LIME (Local Interpretable Model-Agnostic Explanations) to explain the predictions of deep learning models, making these complex models at least partly understandable.
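For reference, a minimal sketch of the LIME usage pattern described there; the classifier and dataset are hypothetical stand-ins for the paper's three tasks.

  # LIME sketch; classifier and data are illustrative stand-ins.
  from lime.lime_tabular import LimeTabularExplainer
  from sklearn.datasets import load_iris
  from sklearn.ensemble import RandomForestClassifier

  data = load_iris()
  model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

  explainer = LimeTabularExplainer(
      data.data,
      feature_names=data.feature_names,
      class_names=list(data.target_names),
      mode="classification",
  )

  # Fit a local, interpretable surrogate around a single prediction:
  exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
  print(exp.as_list())   # per-feature weights in the local surrogate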
...