Corpus ID: 211032256

Using Explainable Artificial Intelligence to Increase Trust in Computer Vision

@article{Meske2020UsingEA,
  title={Using Explainable Artificial Intelligence to Increase Trust in Computer Vision},
  author={Christian Meske and Enrico Bunde},
  journal={ArXiv},
  year={2020},
  volume={abs/2002.01543}
}
Computer Vision, and hence Artificial Intelligence-based extraction of information from images, has received increasing attention in recent years, for instance in medical diagnostics. While the complexity of the algorithms is one reason for their improved performance, it also leads to the "black box" problem, which in turn decreases trust in AI. In this regard, "Explainable Artificial Intelligence" (XAI) makes it possible to open that black box and to improve the degree of AI transparency. In this…


A Highly Transparent and Explainable Artificial Intelligence Tool for Chronic Wound Classification: XAI-CWC

The proposed method successfully provides chronic wound classification and its associated explanation, and this hybrid approach is shown to aid the interpretation and understanding of AI decision-making.

Learning Effective Feature Representation against User Privacy Protection on Social Networks

A novel network representation learning (NRL) model is presented that generates node embeddings able to cope with the data incompleteness caused by user privacy protection; a structure-attribute enhanced matrix (SAEM) is proposed to alleviate data sparsity, and a community-cluster informed NRL method, c2n2v, is developed to further improve the quality of the learned embeddings.

Explainable artificial intelligence enhances the ecological interpretability of black‐box species distribution models


The Enlightening Role of Explainable Artificial Intelligence in Chronic Wound Classification

The proposed method successfully provides chronic wound classification together with an associated explanation, extracting additional knowledge that can also be interpreted by non-data-science experts such as medical scientists and physicians.

Performance of Two Approaches of Embedded Recommender Systems

An optimized algorithm and a parallel hardware implementation are presented as a good approach for running embedded collaborative filtering applications, and are shown to be competitive for embedded applications with large datasets when using parallel implementations based on reconfigurable hardware.

References

Showing 1-10 of 70 references

Explainable artificial intelligence: A survey

Recent developments in XAI in supervised learning are summarized, a discussion on its connection with artificial general intelligence is started, and proposals for further research directions are given.

"Why Should I Trust You?": Explaining the Predictions of Any Classifier

LIME is proposed, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner by learning an interpretable model locally around the prediction.
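A minimal sketch of that idea, assuming the open-source lime package and a scikit-learn random forest on the Iris data (both are illustrative choices, not part of the cited paper): samples are perturbed around one instance and a sparse linear model is fitted to the black-box outputs.

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Black-box model to be explained (illustrative choice).
data = load_iris()
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(data.data, data.target)

# LIME fits an interpretable (sparse linear) model on perturbed samples around one instance.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    discretize_continuous=True,
)
explanation = explainer.explain_instance(data.data[0], model.predict_proba,
                                         num_features=4, top_labels=1)

# Locally most influential features for the top predicted class, with signed weights.
print(explanation.as_list(label=explanation.top_labels[0]))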

Artificial Intelligence in medical imaging practice: looking to the future

It is argued that vital changes to entry-level and advanced curricula, together with national professional capabilities, are needed to ensure machine-learning tools are used in the safest and most effective manner for patients.

Artificial Intelligence in Medical Imaging

This review will attempt to summarize the evolving philosophy and mechanisms behind the AI movement as well as the current applications, limitations, and future directions of the field.

Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)

This survey provides an entry point for interested researchers and practitioners to learn key aspects of the young and rapidly growing body of research related to XAI; it reviews existing approaches on the topic, discusses surrounding trends, and presents major research trajectories.

Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence

A normative framework is built on an analysis of 'opacity' from philosophy of science, modeled after David Marr's influential account of explanation in cognitive science, and specifies the general way in which questions about an opaque computing system should be answered.

Deep-learned faces of pain and emotions: Elucidating the differences of facial expressions with the help of explainable AI methods

The aim of this paper is to investigate the explainable AI methods Layer-wise Relevance Propagation (LRP) and Local Interpretable Model-agnostic Explanations (LIME), applied to explain how a deep neural network distinguishes facial expressions of pain from facial expressions of emotions such as happiness and disgust.
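LRP itself propagates relevance scores backwards layer by layer and is not reproduced here; as a rough, clearly labelled stand-in for how such pixel-level attributions can be obtained, the sketch below computes a plain gradient-times-input saliency map in TensorFlow. The MobileNetV2 classifier and the random input are placeholders, not the facial-expression network from the cited paper.

import numpy as np
import tensorflow as tf

# Placeholder classifier and input; a real use case would load a facial-expression model and image.
model = tf.keras.applications.MobileNetV2(weights="imagenet")
image = tf.convert_to_tensor(np.random.rand(1, 224, 224, 3).astype("float32"))

# Gradient-times-input saliency (a stand-in for LRP, not LRP itself):
# attribute the top-class score to each input pixel via the gradient.
with tf.GradientTape() as tape:
    tape.watch(image)
    probs = model(image)
    score = tf.reduce_max(probs[0])            # score of the predicted class

grads = tape.gradient(score, image)
saliency = tf.reduce_sum(grads * image, axis=-1)[0]   # one relevance value per pixel
print(saliency.shape)                          # (224, 224)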

Opening the Black Box of Deep Neural Networks via Information

This work demonstrates the effectiveness of the Information-Plane visualization of DNNs, showing that training time is dramatically reduced when adding more hidden layers and that the main advantage of the hidden layers is computational.

Local Interpretable Model-Agnostic Explanations for Classification of Lymph Node Metastases

This work aims to generate explanations of how a Convolutional Neural Network detects tumor tissue in patches extracted from histology whole-slide images, using the Local Interpretable Model-agnostic Explanations (LIME) methodology.
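A minimal sketch of that kind of image explanation, assuming the open-source lime package together with a Keras network; the MobileNetV2 model and the random patch below are placeholders, not the histology pipeline of the cited work. LIME perturbs superpixels of the patch and reports the regions that drive the prediction.

import numpy as np
import tensorflow as tf
from lime import lime_image
from skimage.segmentation import mark_boundaries

model = tf.keras.applications.MobileNetV2(weights="imagenet")     # placeholder classifier
patch = np.random.rand(224, 224, 3)                               # placeholder image patch

def predict_fn(batch):
    # lime passes batches of perturbed images in [0, 1]; rescale for the network.
    return model.predict(tf.keras.applications.mobilenet_v2.preprocess_input(batch * 255.0))

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(patch, predict_fn, top_labels=1,
                                         hide_color=0, num_samples=200)

# Superpixels most responsible for the top predicted class, marked on the patch.
img, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                           positive_only=True, num_features=5)
print(mark_boundaries(img, mask).shape)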

Pre-trained convolutional neural networks as feature extractors toward improved malaria parasite detection in thin blood smear images

This study evaluates the performance of pre-trained CNN based DL models as feature extractors toward classifying parasitized and uninfected cells to aid in improved disease screening and experimentally determines the optimal model layers for feature extraction from the underlying data.
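A minimal sketch of that transfer-learning recipe, assuming a Keras VGG16 backbone as a frozen feature extractor and a scikit-learn logistic regression on top; the synthetic arrays stand in for the thin-blood-smear cell images of the cited study.

import numpy as np
import tensorflow as tf
from sklearn.linear_model import LogisticRegression

# Frozen, ImageNet-pretrained backbone; global average pooling yields one feature vector per image.
backbone = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                       pooling="avg", input_shape=(224, 224, 3))

images = np.random.rand(32, 224, 224, 3).astype("float32") * 255.0   # placeholder cell images
labels = np.random.randint(0, 2, size=32)                            # parasitized vs. uninfected (placeholder)

features = backbone.predict(tf.keras.applications.vgg16.preprocess_input(images))  # shape (32, 512)

# Lightweight classifier trained on the extracted features.
clf = LogisticRegression(max_iter=1000).fit(features, labels)
print(clf.score(features, labels))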
...