Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)

Amina Adadi and Mohammed Berrada, IEEE Access
At the dawn of the fourth industrial revolution, we are witnessing a fast and widespread adoption of artificial intelligence (AI) in our daily lives, which is accelerating the shift towards a more algorithmic society. However, even with such unprecedented advancements, a key impediment to the use of AI-based systems is that they often lack transparency. Indeed, the black-box nature of these systems allows powerful predictions, but those predictions cannot be directly explained. This issue has…


Explainable Artificial Intelligence (XAI): An Engineering Perspective
The remarkable advancements in Deep Learning (DL) algorithms have fueled enthusiasm for using Artificial Intelligence (AI) technologies in almost every domain; however, the opaqueness of these models…
Explainable Artificial Intelligence Approaches: A Survey
This work demonstrates popular XAI methods on a shared case study/task, provides meaningful insight into quantifying explainability, and recommends paths towards responsible, human-centered AI, using XAI as a medium to understand, compare, and correlate the competitive advantages of popular XAI methods.
Explainable AI (XAI): A Systematic Meta-Survey of Current Challenges and Future Opportunities
A systematic meta-survey of challenges and future research directions in XAI is presented, organized into two themes based on the phases of the machine learning life cycle: design, development, and deployment.
Explainable Artificial Intelligence (XAI) for Internet of Things: A Survey
An in-depth and systematic review of recent studies using XAI models in the IoT domain, which categorizes the studies according to their methodology and application areas in order to focus on challenging problems and open issues.
The intersection of evolutionary computation and explainable AI
It is suggested that the EC community may play a major role in the achievement of XAI, and there are still several research opportunities and open research questions that may promote a safer and broader adoption of EC in real-world applications.
SeXAI: Introducing Concepts into Black Boxes for Explainable Artificial Intelligence
This paper presents the first version of SeXAI, a semantic-based explainable framework aiming to exploit semantic information for making black boxes more transparent and shows its application to a real-world use case.
Provenance documentation to enable explainable and trustworthy AI: A literature review
This paper conducts a systematic literature review of provenance, XAI, and trustworthy AI (TAI) to explain the fundamental concepts and illustrate the potential of using provenance as a medium to help accomplish explainability in AI-based systems.
Trustworthy AI: From Principles to Practices
This review provides AI practitioners with a comprehensive guide for building trustworthy AI systems and introduces the theoretical framework of important aspects of AI trustworthiness, including robustness, generalization, explainability, transparency, reproducibility, fairness, privacy preservation, and accountability.
Explainable information retrieval using deep learning for medical images
The proposed deep learning model for image classification is compared with state-of-the-art methods, including Logistic Regression, Support Vector Machine, Artificial Neural Network, and Random Forest, and reports an F1 score of 0.76 on a real-world streaming dataset, which is better than the traditional methods.


Local Rule-Based Explanations of Black Box Decision Systems
This paper proposes LORE, a model-agnostic method that provides interpretable and faithful explanations of black box outcomes, and shows that LORE outperforms existing methods and baselines both in the quality of its explanations and in its accuracy in mimicking the black box.
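LORE itself generates a local neighborhood with a genetic algorithm and extracts rules from a decision tree; the following is only a rough sketch of that local-surrogate idea, under assumed substitutions: a random forest stands in as the black box, and plain Gaussian perturbation replaces LORE's genetic neighborhood generation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Black box: a random forest trained on synthetic data.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def local_rule_explanation(instance, n_samples=1000, scale=0.5, seed=0):
    """Explain one prediction with a shallow tree fit on a local neighborhood."""
    rng = np.random.default_rng(seed)
    # Sample a neighborhood around the instance (Gaussian perturbations).
    neighborhood = instance + rng.normal(0, scale, size=(n_samples, instance.shape[0]))
    # Label the neighborhood with the black box, then mimic it locally.
    labels = black_box.predict(neighborhood)
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=seed)
    surrogate.fit(neighborhood, labels)
    fidelity = surrogate.score(neighborhood, labels)  # how well the tree mimics the model
    return export_text(surrogate), fidelity

rules, fidelity = local_rule_explanation(X[0])
print(f"local fidelity: {fidelity:.2f}")
print(rules)
```

The printed tree paths play the role of LORE's decision rules; the fidelity score measures how faithfully the surrogate mimics the black box in that neighborhood, which is one of the qualities the paper evaluates.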
A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy
Artificial general intelligence (AGI) is AI that can reason across a wide range of domains. It has long been considered the “grand dream” or “holy grail” of AI. It also poses major issues of ethics, risk, and policy.
Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR
It is suggested data controllers should offer a particular type of explanation, unconditional counterfactual explanations, to support these three aims, which describe the smallest change to the world that can be made to obtain a desirable outcome, or to arrive at the closest possible world, without needing to explain the internal logic of the system.
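The “smallest change to obtain a desirable outcome” can be read as a search for the nearest input that the model classifies differently. The sketch below is a minimal illustration of that idea, not the paper's method: a hypothetical logistic-regression black box and a plain random search over perturbations stand in for any principled optimizer.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, n_features=3, n_informative=3,
                           n_redundant=0, random_state=1)
model = LogisticRegression().fit(X, y)

def counterfactual(x, target, n_trials=5000, seed=1):
    """Random search for the closest point the model labels `target`."""
    rng = np.random.default_rng(seed)
    best, best_dist = None, np.inf
    for _ in range(n_trials):
        candidate = x + rng.normal(0, 1.0, size=x.shape)
        if model.predict(candidate.reshape(1, -1))[0] == target:
            dist = np.linalg.norm(candidate - x)
            if dist < best_dist:          # keep the smallest change found so far
                best, best_dist = candidate, dist
    return best, best_dist

x = X[0]
original = model.predict(x.reshape(1, -1))[0]
cf, dist = counterfactual(x, target=1 - original)
print(f"flip {original} -> {1 - original} by moving {dist:.2f} in feature space")
```

The returned point describes the change without exposing any of the model's internal logic, which is the property the paper argues is sufficient for GDPR-style explanations.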
Explainable AI: Beware of Inmates Running the Asylum Or: How I Learnt to Stop Worrying and Love the Social and Behavioural Sciences
A light scan of the literature demonstrates that there is considerable scope to infuse more results from the social and behavioural sciences into explainable AI, and key results from these fields that are relevant to explainable AI are presented.
Explainable artificial intelligence: A survey
Recent developments in XAI in supervised learning are summarized, a discussion on its connection with artificial general intelligence is started, and proposals for further research directions are given.
Explaining Explanations: An Approach to Evaluating Interpretability of Machine Learning
A definition of explainability is provided, it is shown how this definition can be used to classify the existing literature, and best practices and open challenges in explainable artificial intelligence are discussed.
A Survey of Methods for Explaining Black Box Models
A classification of the main problems addressed in the literature, with respect to the notion of explanation and the type of black box system, is provided to help researchers find the proposals most useful for their own work.
Explainable AI for Designers: A Human-Centered Perspective on Mixed-Initiative Co-Creation
This vision paper proposes a new research area of eXplainable AI for Designers (XAID), specifically for game designers, and illustrates the initial XAID framework through three use cases, which require an understanding both of the innate properties of the AI techniques and of users’ needs.
MAGIX: Model Agnostic Globally Interpretable Explanations
The approach first extracts conditions that were important at the instance level and then evolves rules through a genetic algorithm with an appropriate fitness function; the resulting rules represent the patterns the model follows when making decisions and are useful for understanding its behavior.
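As a loose illustration of evolving rules against a black box (not MAGIX's actual algorithm), the sketch below runs a tiny genetic loop over hypothetical single-condition rules, scoring each by precision times coverage against the model's own predictions rather than the ground truth.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=400, n_features=4, random_state=2)
black_box = RandomForestClassifier(random_state=2).fit(X, y)
bb_labels = black_box.predict(X)  # rules are fit to the model, not the data

def fitness(rule):
    """Score a rule (feature, threshold, sign) by precision x coverage
    against the black box's own predictions for class 1."""
    f, t, s = rule
    fires = (X[:, f] > t) if s > 0 else (X[:, f] <= t)
    if not fires.any():
        return 0.0
    precision = bb_labels[fires].mean()   # fraction of firings the model labels 1
    coverage = fires.mean()               # fraction of instances the rule covers
    return precision * coverage

def evolve(pop_size=40, generations=30, seed=2):
    rng = np.random.default_rng(seed)
    pop = [(rng.integers(X.shape[1]), rng.normal(), rng.choice([-1, 1]))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # Mutate survivors' thresholds to produce the next generation.
        children = [(f, t + rng.normal(0, 0.2), s) for f, t, s in survivors]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(f"best rule: feature {best[0]} {'>' if best[2] > 0 else '<='} {best[1]:.2f}, "
      f"fitness {fitness(best):.2f}")
```

Fitting the fitness function to the model's predictions rather than the true labels is the key design choice: the evolved rule then describes what the black box does, which is the behavior-understanding goal the summary describes.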
What do we need to build explainable AI systems for the medical domain?
It is argued that research in explainable AI would generally help to facilitate the implementation of AI/ML in the medical domain, and specifically help to foster transparency and trust.