Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI

@article{Arrieta2020ExplainableAI,
  title={Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI},
  author={Alejandro Barredo Arrieta and Natalia D{\'i}az Rodr{\'i}guez and Javier Del Ser and Adrien Bennetot and Siham Tabik and Alberto Barbado and Salvador Garc{\'i}a and Sergio Gil-Lopez and Daniel Molina and Richard Benjamins and Raja Chatila and Francisco Herrera},
  journal={Information Fusion},
  year={2020},
  volume={58},
  pages={82--115}
}
Explainable Artificial Intelligence (XAI): An Engineering Perspective
The remarkable advancements in Deep Learning (DL) algorithms have fueled enthusiasm for using Artificial Intelligence (AI) technologies in almost every domain; however, the opaqueness of these ...
Explainable AI (XAI): A Systematic Meta-Survey of Current Challenges and Future Opportunities
TLDR
A systematic meta-survey of challenges and future research directions in XAI is presented, organized into two themes based on the phases of the machine learning life cycle: design, development, and deployment.
Explainable AI: A Review of Machine Learning Interpretability Methods
TLDR
This study focuses on machine learning interpretability methods; more specifically, a literature review and taxonomy of these methods are presented, as well as links to their programming implementations, in the hope that this survey will serve as a reference point for both theorists and practitioners.
Reviewing the Need for Explainable Artificial Intelligence (xAI)
TLDR
A systematic review of the xAI literature identifies four thematic debates central to how xAI addresses the black-box problem, and synthesizes the findings into a future research agenda to further the xAI body of knowledge.
A multi-component framework for the analysis and design of explainable artificial intelligence
TLDR
A strategic inventory of XAI requirements is provided, its connection to a history of XAI ideas is demonstrated, and those ideas are synthesized into a simple framework that calibrates five successive levels of XAI.
Explainable Artificial Intelligence Approaches: A Survey
TLDR
This work demonstrates popular XAI methods on a common case study/task, provides meaningful insight on quantifying explainability, and recommends paths towards responsible or human-centered AI, using XAI as a medium to understand, compare, and correlate the competitive advantages of popular XAI methods.
Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey
TLDR
A taxonomy is proposed that categorizes XAI techniques by their scope of explanation, the methodology behind the algorithms, and the explanation level or usage, which together help build trustworthy, interpretable, and self-explanatory deep learning models.
A historical perspective of explainable Artificial Intelligence
TLDR
A historical perspective of explainability in AI is presented, and criteria for explanations are proposed that are believed to play a crucial role in the development of human-understandable explainable systems.
Expectation: Personalized Explainable Artificial Intelligence for Decentralized Agents with Heterogeneous Knowledge
TLDR
This manuscript presents the overall objectives and approach of the Expectation project, focusing on the theoretical and practical advancement of the state of the art in XAI towards the construction of personalised explanations, in spite of the decentralisation and heterogeneity of knowledge, agents, and explainees.
...

References

Showing 1-10 of 409 references
A Survey on Explainable Artificial Intelligence (XAI): Toward Medical XAI
TLDR
A review of the interpretability notions suggested by different research works is provided, and these works are categorized, in the hope that insights into interpretability will emerge with more consideration for medical practice; initiatives to push forward data-based, mathematically grounded, and technically grounded medical education are also encouraged.
Explainable artificial intelligence: A survey
TLDR
Recent developments in XAI in supervised learning are summarized, a discussion on its connection with artificial general intelligence is started, and proposals for further research directions are given.
Reconciling deep learning with symbolic artificial intelligence: representing objects and relations
Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models
TLDR
Two approaches to explaining the predictions of deep learning models are presented: one computes the sensitivity of the prediction with respect to changes in the input, and the other meaningfully decomposes the decision in terms of the input variables (see the sketch after the reference list).
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
TLDR
This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI; it reviews the existing approaches regarding the topic, discusses trends surrounding its sphere, and presents major research trajectories.
Explainable AI: The New 42?
TLDR
Explainable AI is not a new field but rather the evolution of formal reasoning architectures to incorporate principled probabilistic reasoning, which helped address the capture and use of uncertain knowledge.
Towards Explainable Neural-Symbolic Visual Reasoning
TLDR
It is argued why techniques integrating connectionist and symbolic paradigms are the most efficient solutions for producing explanations for non-technical users, and a reasoning model, based on the definitions of Doran et al. (2017), is proposed to explain a neural network's decision.
The Twin-System Approach as One Generic Solution for XAI: An Overview of ANN-CBR Twins for Explaining Deep Learning
TLDR
This short paper overviews very recent work that advances a generic solution to the XAI problem, the so-called twin-system approach, and outlines how recent work reviving this idea has applied it to deep learning methods.
Neural-Symbolic Computing: An Effective Methodology for Principled Integration of Machine Learning and Reasoning
TLDR
Recent accomplishments of neural-symbolic computing as a principled methodology for the integration of machine learning and reasoning are surveyed, and the insights provided shed new light on the increasingly prominent need for interpretable and accountable AI systems.
The truth will come to light: directions and challenges in extracting the knowledge embedded within trained artificial neural networks
TLDR
This paper shows not only that the ADT taxonomy is applicable to a cross-section of current techniques for extracting rules from trained feedforward ANNs, but also how the taxonomy can be adapted and extended to embrace a broader range of ANN types and explanation structures.
...
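The reference entry for "Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models" names two explanation strategies without giving their formulas. Below is a minimal sketch of how that line of work commonly formalizes them; the symbols f for the model, x for the input, and R_i for the relevance of input variable x_i are assumptions of this sketch, not the paper's own notation.

% Minimal sketch (assumed notation): f is the model, x the input,
% R_i the relevance attributed to input variable x_i.

% (1) Sensitivity analysis: relevance is the squared local gradient,
% i.e., how strongly the prediction f(x) reacts to a change in x_i.
R_i(x) = \left( \frac{\partial f}{\partial x_i}(x) \right)^{2}

% (2) Decomposition (e.g., layer-wise relevance propagation):
% relevance scores are constructed so that they redistribute the
% prediction itself over the input variables.
\sum_i R_i(x) \approx f(x)

Intuitively, the first quantity explains what change would make the prediction move, while the second explains which variables the prediction itself rests on; the approximation becomes an equality when the propagation rule is conservative.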