Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI

@article{Arrieta2020ExplainableAI,
  title={Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI},
  author={Alejandro Barredo Arrieta and Natalia D{\'i}az Rodr{\'i}guez and Javier Del Ser and Adrien Bennetot and Siham Tabik and Alberto Barbado and Salvador Garc{\'i}a and Sergio Gil-Lopez and Daniel Molina and Richard Benjamins and Raja Chatila and Francisco Herrera},
  journal={Inf. Fusion},
  year={2020},
  volume={58},
  pages={82-115}
}
Citations

Explanatory machine learning for sequential human teaching
TLDR
Empirical results show that sequential teaching of concepts with increasing complexity has a beneficial effect on human comprehension and leads to human re-discovery of divide-and-conquer problem-solving strategies, and that studying machine-learned explanations allows adaptation of human problem-solving strategies with better performance.
Symbolic AI for XAI: Evaluating LFIT Inductive Programming for Explaining Biases in Machine Learning
TLDR
The present work takes a first step toward a general methodology for incorporating accurate declarative explanations into classic machine learning by checking the viability of LFIT in a specific AI application scenario: fair recruitment based on an automatic tool, generated with machine learning methods for ranking Curricula Vitae, that incorporates soft biometric information (gender and ethnicity).
Anomaly detection in average fuel consumption with XAI techniques for dynamic generation of explanations
In this paper we show a complete process for unsupervised anomaly detection of the average fuel consumption of fleet vehicles that is able to explain which variables are affecting the consumption in ...
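A minimal sketch of the detection half of such a pipeline, assuming scikit-learn's IsolationForest; the feature names and synthetic data below are illustrative stand-ins, not the authors' variables or method:

    # Unsupervised anomaly detection over per-trip fuel features (illustrative).
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    trips = pd.DataFrame({
        "avg_fuel_l_per_100km": rng.normal(30, 3, 1000),  # hypothetical features
        "avg_speed_kmh": rng.normal(65, 10, 1000),
        "payload_t": rng.normal(12, 2, 1000),
    })
    model = IsolationForest(contamination=0.01, random_state=0).fit(trips)
    trips["anomaly"] = model.predict(trips) == -1         # -1 marks outliers
    print(trips[trips["anomaly"]].head())

Flagged trips would then be handed to a feature-attribution step (e.g., SHAP, listed in the references below) to explain which variables drove each anomaly.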
Explainable Intrusion Detection Systems (X-IDS): A Survey of Current Methods, Challenges, and Opportunities
TLDR
This survey reviews the state of the art in explainable AI (XAI) for IDS and its current challenges, discusses how these challenges carry over to the design of an X-IDS, and proposes a generic architecture that considers the human-in-the-loop and can be used as a guideline when designing an X-IDS.
Towards FAIR Explainable AI: a standardized ontology for mapping XAI solutions to use cases, explanations, and AI systems
TLDR
The ASCENT (Ai System use Case Explanation oNTology) framework is designed as a new ontology and corresponding metadata standard with three complementary modules for different aspects of an XAI solution: one for aspects of AI systems, another for use-case aspects, and a third for explanation properties.
The Role of Human Knowledge in Explainable AI
TLDR
This article aims to present a literature overview on collecting and employing human knowledge to improve and evaluate the understandability of machine learning models through human-in-the-loop approaches.
Uniting Machine Intelligence, Brain and Behavioural Sciences to Assist Criminal Justice
I discuss here three important roles in which machine intelligence, brain, and behaviour studies together may facilitate criminal law. First, brain imaging analysis and predictive modelling using brain ...
A taxonomy of explanations to support Explainability-by-Design
TLDR
This paper presents a taxonomy of explanations that was developed as part of a holistic ‘Explainability-by-Design’ approach for the purposes of the PLEAD project; it can be used as a stand-alone classifier of explanations conceived as detective controls, in order to support automated compliance strategies.
Towards Explainable Social Agent Authoring tools: A case study on FAtiMA-Toolkit
TLDR
This paper examines whether an authoring tool, FAtiMA-Toolkit, is understandable and whether its authoring steps are interpretable from the point of view of the author, and provides a set of key concepts and possible solutions that can guide developers in building such tools.
...

References

SHOWING 8 OF 456 REFERENCES
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
TLDR
LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction.
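As a rough illustration of that local-surrogate idea (a simplified sketch, not the lime package's implementation; the sampling and kernel choices here are assumptions):

    # Explain one prediction of a black-box classifier with a local linear model.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import Ridge

    X, y = load_breast_cancer(return_X_y=True)
    black_box = RandomForestClassifier(random_state=0).fit(X, y)

    x = X[0]                                  # instance to explain
    scale = X.std(axis=0)
    rng = np.random.default_rng(0)
    Z = x + rng.normal(scale=scale, size=(500, X.shape[1]))  # perturb around x
    pz = black_box.predict_proba(Z)[:, 1]     # black-box outputs on the neighbors
    w = np.exp(-np.linalg.norm((Z - x) / scale, axis=1) ** 2 / 25.0)  # proximity kernel

    surrogate = Ridge(alpha=1.0).fit(Z, pz, sample_weight=w)  # weighted linear fit
    for i in np.argsort(np.abs(surrogate.coef_))[::-1][:5]:
        print(f"feature {i}: local weight {surrogate.coef_[i]:+.4f}")

The surrogate's coefficients approximate the classifier's behaviour only in the neighbourhood of x, which is what makes the explanation local and model-agnostic.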
Explainable Artificial Intelligence (XAI)
  • M. Ridley
  • Computer Science
  • Information Technology and Libraries
  • 2022
The field of explainable artificial intelligence (XAI) advances techniques, processes, and strategies that provide explanations for the predictions, recommendations, and decisions of opaque and ...
A Survey of Methods for Explaining Black Box Models
TLDR
A classification of the main problems addressed in the literature with respect to the notion of explanation and the type of black-box system is provided, to help researchers find the proposals most useful for their own work.
Deep k-Nearest Neighbors: Towards Confident, Interpretable and Robust Deep Learning
TLDR
The DkNN algorithm is evaluated on several datasets, and it is shown that its confidence estimates accurately identify inputs outside the model's training distribution, and that the explanations provided by nearest neighbors are intuitive and useful in understanding model failures.
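A simplified sketch of the neighbor-agreement idea, assuming scikit-learn; the paper applies k-NN search at every layer of a deep network and calibrates with conformal prediction, whereas this uses a single hidden layer of a small MLP:

    # Confidence from label agreement among nearest neighbors in a hidden space.
    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import NearestNeighbors
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                        random_state=0).fit(X_tr, y_tr)

    def hidden(A):
        # Forward pass to the hidden layer using the MLP's learned weights (ReLU).
        return np.maximum(A @ mlp.coefs_[0] + mlp.intercepts_[0], 0.0)

    nn = NearestNeighbors(n_neighbors=25).fit(hidden(X_tr))
    _, idx = nn.kneighbors(hidden(X_te))
    preds = mlp.predict(X_te)
    agreement = (y_tr[idx] == preds[:, None]).mean(axis=1)  # neighbor support
    print("mean neighbor agreement:", agreement.mean())

Low agreement flags inputs for which the training data gives little support, which is the behaviour the confidence estimates above are credited with.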
A Unified Approach to Interpreting Model Predictions
TLDR
A unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations), which unifies six existing methods and presents new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.
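SHAP has a reference implementation in the shap Python package; a minimal usage sketch with its TreeExplainer follows (the model and dataset are arbitrary examples, not from the paper):

    # Per-feature additive contributions for a tree model's predictions.
    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    X, y = load_diabetes(return_X_y=True, as_frame=True)
    model = RandomForestRegressor(random_state=0).fit(X, y)

    explainer = shap.TreeExplainer(model)        # polynomial-time SHAP for trees
    shap_values = explainer.shap_values(X.iloc[:5])
    print(dict(zip(X.columns, shap_values[0])))  # contributions for the first row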
European Union Regulations on Algorithmic Decision-Making and a "Right to Explanation"
TLDR
It is argued that while this law will pose large challenges for industry, it highlights opportunities for computer scientists to take the lead in designing algorithms and evaluation frameworks which avoid discrimination and enable explanation.
The Mythos of Model Interpretability
TLDR
This essay examines the motivations underlying interest in interpretability, finding them diverse and occasionally discordant, and argues that the term 'interpretability' currently holds no agreed-upon meaning.
Conditional Random Fields as Recurrent Neural Networks
TLDR
A new form of convolutional neural network that combines the strengths of Convolutional Neural Networks (CNNs) and Conditional Random Fields (CRFs)-based probabilistic graphical modelling is introduced, and top results are obtained on the challenging Pascal VOC 2012 segmentation benchmark.
...