Explainable Artificial Intelligence: An Updated Perspective

  • Agneza Krajna, M. Kovac, Mario Brčić, Ana Šarčević
  • Published 23 May 2022
  • Computer Science
  • 2022 45th Jubilee International Convention on Information, Communication and Electronic Technology (MIPRO)
Artificial intelligence has become mainstream, and its applications will only proliferate. Specific measures must be taken to integrate such systems into society for the general benefit. One of the tools for improving that integration is explainability, which boosts trust and understanding of decisions between humans and machines. This research offers an update on the current state of explainable AI (XAI). Recent XAI surveys in supervised learning show convergence of the main conceptual ideas. We list the… 
1 Citation
Exact solving scheduling problems accelerated by graph neural networks
This paper applies a graph convolutional neural network from the literature, which speeds up a general branch-and-bound solver by learning its branching decisions, to an augmented solver, and discusses the interesting question of how much solving of NP-hard problems can be accelerated in light of the known limits and impossibility results in AI.


Explainable artificial intelligence: A survey
Recent developments in XAI in supervised learning are summarized, a discussion on its connection with artificial general intelligence is started, and proposals for further research directions are given.
Explainability in reinforcement learning: perspective and position
This position paper attempts to give a systematic overview of existing methods in the explainable RL area and propose a novel unified taxonomy, building and expanding on the existing ones.
Explainable Artificial Intelligence (XAI) on TimeSeries Data: A Survey
This paper presents an overview of existing explainable AI (XAI) methods applied to time series, illustrates the types of explanations they produce, and reflects on how these explanation methods can build confidence and trust in AI systems.
Explainable Artificial Intelligence Approaches: A Survey
This work demonstrates popular XAI methods on a common case study/task, provides meaningful insight into quantifying explainability, and recommends paths toward responsible or human-centered AI, using XAI as a medium to understand, compare, and correlate the competitive advantages of popular XAI methods.
Explainable AI: A Review of Machine Learning Interpretability Methods
This study focuses on machine learning interpretability methods; more specifically, it presents a literature review and taxonomy of these methods, along with links to their programming implementations, in the hope that the survey will serve as a reference point for both theorists and practitioners.
Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey
This work proposes a taxonomy that categorizes XAI techniques by the scope of their explanations, the methodology behind the algorithms, and the explanation level or usage, which helps build trustworthy, interpretable, and self-explanatory deep learning models.
XGNN: Towards Model-Level Explanations of Graph Neural Networks
This work proposes a novel approach, known as XGNN, to interpret GNNs at the model-level by training a graph generator so that the generated graph patterns maximize a certain prediction of the model.
Unmasking Clever Hans predictors and assessing what machines really learn
The authors investigate how nonlinear learning machines approach learning in order to assess the dependability of their decision making, and propose a semi-automated Spectral Relevance Analysis that provides a practically effective way of characterizing and validating the behavior of such machines.