Corpus ID: 219176929

Explainable Artificial Intelligence: a Systematic Review

@article{Vilone2020ExplainableAI,
  title={Explainable Artificial Intelligence: a Systematic Review},
  author={Giulia Vilone and Luca Longo},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.00093}
}
Explainable Artificial Intelligence (XAI) has experienced significant growth over the last few years. This is due to the widespread application of machine learning, particularly deep learning, which has led to the development of highly accurate models that nevertheless lack explainability and interpretability. A plethora of methods to tackle this problem have been proposed, developed and tested. This systematic review contributes to the body of knowledge by clustering these methods with a hierarchical…
Explanatory Pluralism in Explainable AI
  • Yiheng Yao
  • Computer Science
  • CD-MAKE
  • 2021
TLDR
This paper reduces the ambiguity in the use of the word ‘explanation’ in the field of XAI, giving practitioners and stakeholders a useful template for avoiding equivocation and for evaluating XAI methods and putative explanations.
A Checklist for Explainable AI in the Insurance Domain
TLDR
This paper investigates the current usage of AI algorithms in the Dutch insurance industry and the adoption of explainable artificial intelligence (XAI) techniques, and designs a checklist for insurance companies to help assure quality standards regarding XAI and a solid foundation for cooperation between organisations.
A Fuzzy Shell for Developing an Interpretable BCI Based on the Spatiotemporal Dynamics of the Evoked Oscillations
TLDR
A general-purpose fuzzy software system shell for developing a custom EEG BCI system that relies on the bursts of the ongoing EEG frequency power synchronization/desynchronization at scalp level and supports quick BCI setup by linguistic features, ad hoc fuzzy membership construction, explainable IF-THEN rules, and the concept of the Internet of Things.
A Highly Transparent and Explainable Artificial Intelligence Tool for Chronic Wound Classification: XAI-CWC
TLDR
The proposed method successfully provides chronic wound classification and its associated explanation, and this hybrid approach is shown to aid the interpretation and understanding of AI decision-making.
A Review on Explainability in Multimodal Deep Neural Nets
TLDR
This paper extensively reviews the present literature to present a comprehensive survey and commentary on the explainability in multimodal deep neural nets, especially for the vision and language tasks.
A Step Towards Explainable Person Re-identification Rankings
More and more video and image data are available to security authorities that can help solve crimes. Since manual analysis is time-consuming, algorithms are needed that support, e.g., re-identification…
Accelerated evolutionary induction of heterogeneous decision trees for gene expression-based classification
TLDR
This work aims to combine evolutionarily induced decision trees with a recently developed concept designed directly for gene expression data, called Relative eXpression Analysis (RXA), which uses both classical univariate and bivariate tests that focus on the relative ordering and weight relationships between the genes in the splitting nodes.
Autoencoder-based anomaly root cause analysis for wind turbines
TLDR
This paper uses ARCANA to identify the possible root causes of anomalies detected by an autoencoder, describing the reconstruction process as an optimisation problem that aims to substantially remove anomalous properties from an anomaly.
Believe The HiPe: Hierarchical Perturbation for Fast and Robust Explanation of Black Box Models
TLDR
Hierarchical Perturbation is proposed, a very fast and completely model-agnostic method for explaining model predictions with robust saliency maps that are of competitive or superior quality to those generated by existing black-box methods.

References

SHOWING 1-10 OF 380 REFERENCES
Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models
TLDR
Two approaches to explaining predictions of deep learning models are presented: one computes the sensitivity of the prediction with respect to changes in the input, and the other meaningfully decomposes the decision in terms of the input variables.
An Integrative 3C evaluation framework for Explainable Artificial Intelligence
TLDR
An integrated framework with three evaluation criteria (correlation, completeness, and complexity) to evaluate XAI is proposed, and it is found that the rule-extraction method is the most advanced and promising among current XAI methods.
Explainable artificial intelligence: A survey
TLDR
Recent developments in XAI in supervised learning are summarized, a discussion on its connection with artificial general intelligence is started, and proposals for further research directions are given.
Designing Explainability of an Artificial Intelligence System
TLDR
A research framework for designing the causal explainability of an AI system is provided; based on the attribution results, users will perceive the system as human-like, which motivates anthropomorphism.
Towards Explainable Artificial Intelligence
TLDR
This introductory paper presents recent developments and applications in deep learning, and makes a plea for a wider use of explainable learning algorithms in practice.
Explainable AI: A Brief Survey on History, Research Areas, Approaches and Challenges
TLDR
The history of Explainable AI is introduced, starting from expert systems and traditional machine learning approaches to the latest progress in the context of modern deep learning, and the major research areas and the state-of-the-art approaches in recent years are described.
Visual Analytics for Explainable Deep Learning
  • J. Choo, Shixia Liu
  • Computer Science, Mathematics
  • IEEE Computer Graphics and Applications
  • 2018
Recently, deep learning has been advancing the state of the art in artificial intelligence to a new level, and humans rely on artificial intelligence techniques more than ever. However, even with…
Explainable Artificial Intelligence via Bayesian Teaching
Modern machine learning methods are increasingly powerful and opaque. This opaqueness is a concern across a variety of domains in which algorithms are making important decisions that should be…
Towards Dependable and Explainable Machine Learning Using Automated Reasoning
TLDR
A novel automated-reasoning-based approach that can extract valuable insights from classification and prediction models obtained via machine learning, so that the user can understand the reason behind the decision-making of machine learning models.
Explainable Artificial Intelligence Applications in NLP, Biomedical, and Malware Classification: A Literature Review
TLDR
This review evaluates three classification tasks that use LIME (Local Interpretable Model-Agnostic Explanations) to explain the predictions of deep learning models, attempting to make these complex models at least partly understandable.
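The core idea behind LIME mentioned above can be sketched as fitting a local linear surrogate to a black-box model around a single instance. The sketch below is a simplified, self-contained illustration with a made-up black-box function; the actual LIME library additionally weights perturbed samples by proximity and performs feature selection.

```python
import numpy as np

# Hypothetical black-box model: scores an input with 3 features.
def black_box(X):
    return 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * np.sin(X[:, 2])

rng = np.random.default_rng(0)
x0 = np.array([1.0, 2.0, 3.0])  # the instance to explain

# 1. Perturb the instance locally and query the black box.
samples = x0 + rng.normal(scale=0.1, size=(500, 3))
preds = black_box(samples)

# 2. Fit a linear surrogate (least squares with intercept) around x0.
A = np.hstack([samples, np.ones((500, 1))])
coef, *_ = np.linalg.lstsq(A, preds, rcond=None)

# coef[:3] approximates each feature's local influence on the prediction:
# roughly 2.0 and -1.0 for the two linear features.
print(np.round(coef[:3], 2))
```

Because the surrogate is only fitted on points near `x0`, its coefficients explain the model's behaviour locally, not globally; that locality is what makes the approach model-agnostic.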