Explainable Artificial Intelligence: An Updated Perspective
@article{Krajna2022ExplainableAI, title={Explainable Artificial Intelligence: An Updated Perspective}, author={Agneza Krajna and M. Kovac and Mario Br{\v c}i{\'c} and Ana {\v S}ar{\v c}evi{\'c}}, journal={2022 45th Jubilee International Convention on Information, Communication and Electronic Technology (MIPRO)}, year={2022}, pages={859-864} }
Artificial intelligence has become mainstream and its applications will only proliferate. Specific measures must be taken to integrate such systems into society for the general benefit. One tool for achieving this is explainability, which boosts trust in and understanding of decisions between humans and machines. This research offers an update on the current state of explainable AI (XAI). Recent XAI surveys in supervised learning show convergence of the main conceptual ideas. We list the…
One Citation
Exact solving scheduling problems accelerated by graph neural networks
- Business, Computer Science · 2022 45th Jubilee International Convention on Information, Communication and Electronic Technology (MIPRO)
- 2022
This paper applies a graph convolutional neural network from the literature on speeding up a general branch-and-bound solver by learning its branching decisions to the augmented solver, and discusses the interesting question of how much solving of NP-hard problems can be accelerated in light of the known limits and impossibility results in AI.
References
Showing 1-10 of 50 references
Explainable artificial intelligence: A survey
- Computer Science · 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO)
- 2018
Recent developments in XAI in supervised learning are summarized, a discussion on its connection with artificial general intelligence is started, and proposals for further research directions are given.
Explainability in reinforcement learning: perspective and position
- Computer Science · ArXiv
- 2022
This position paper attempts to give a systematic overview of existing methods in the explainable RL area and propose a novel unified taxonomy, building and expanding on the existing ones.
Notions of explainability and evaluation approaches for explainable artificial intelligence
- Computer Science · Inf. Fusion
- 2021
Explainable Artificial Intelligence (XAI) on TimeSeries Data: A Survey
- Computer Science · ArXiv
- 2021
This paper presents an overview of existing explainable AI (XAI) methods applied to time series, illustrates the type of explanations they produce, and reflects on the impact of these explanation methods in providing confidence and trust in AI systems.
Explainable Artificial Intelligence Approaches: A Survey
- Computer Science · ArXiv
- 2021
This work demonstrates popular XAI methods with a common case study/task, provides meaningful insight on quantifying explainability, and recommends paths towards responsible or human-centered AI, using XAI as a medium to understand, compare, and correlate the competitive advantages of popular XAI methods.
Explainable AI: A Review of Machine Learning Interpretability Methods
- Computer Science · Entropy
- 2021
This study focuses on machine learning interpretability methods; more specifically, a literature review and taxonomy of these methods are presented, as well as links to their programming implementations, in the hope that this survey would serve as a reference point for both theorists and practitioners.
Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey
- Computer Science · ArXiv
- 2020
A taxonomy is proposed that categorizes XAI techniques by their scope of explanations, the methodology behind the algorithms, and the explanation level or usage, which helps build trustworthy, interpretable, and self-explanatory deep learning models.
XGNN: Towards Model-Level Explanations of Graph Neural Networks
- Computer Science · KDD
- 2020
This work proposes a novel approach, known as XGNN, to interpret GNNs at the model-level by training a graph generator so that the generated graph patterns maximize a certain prediction of the model.
Unmasking Clever Hans predictors and assessing what machines really learn
- Computer Science · Nature Communications
- 2019
The authors investigate how these learning machines arrive at their decisions in order to assess the dependability of their decision making, and propose a semi-automated Spectral Relevance Analysis that provides a practically effective way of characterizing and validating the behavior of nonlinear learning machines.
Towards multi-modal causability with Graph Neural Networks enabling information fusion for explainable AI
- Computer Science · Inf. Fusion
- 2021