Explaining neural network predictions of material strength

@article{Palmer2021ExplainingNN,
  title={Explaining neural network predictions of material strength},
  author={Ian Palmer and Terrell N. Mundhenk and Brian J. Gallagher and Yong Han},
  journal={ArXiv},
  year={2021},
  volume={abs/2111.03729}
}
We recently developed a deep learning method that can determine the critical peak stress of a material from scanning electron microscope (SEM) images of the material's crystals. However, it has been somewhat unclear what kinds of image features the network relies on when it makes its prediction. It is common in computer vision to employ an explainable AI saliency map to show which parts of an image are important to the network's decision. One can usually deduce the important…
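To make concrete what such a saliency map computes, a minimal vanilla-gradient sketch in PyTorch follows; the model, image, and function names are hypothetical stand-ins, not the paper's code.

import torch

# Minimal gradient-saliency sketch: rank pixels by how strongly the
# class score changes with each input pixel.
def gradient_saliency(model, image, target_class):
    image = image.clone().requires_grad_(True)   # (1, C, H, W)
    score = model(image)[0, target_class]        # scalar class score
    score.backward()                             # d(score)/d(image)
    # Collapse channels: pixel importance = max absolute gradient
    return image.grad.abs().max(dim=1)[0]        # (1, H, W) saliency map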

Citations

Artificial intelligence approaches for materials-by-design of energetic materials: state-of-the-art, challenges, and future directions

This paper reviews recent advances in AI-driven materials-by-design and their applications to energetic materials (EM), and suggests several promising future research directions for EM materials-by-design, such as meta-learning, active learning, Bayesian learning, and semi-/weakly-supervised learning.

References


Predicting compressive strength of consolidated molecular solids using computer vision and deep learning

Full-Gradient Representation for Neural Network Visualization

This work introduces a new tool for interpreting neural nets, the full-gradient representation, which decomposes the neural net response into input sensitivity and per-neuron sensitivity components, and proposes an approximate saliency map representation for convolutional nets dubbed FullGrad, obtained by aggregating the full-gradient components.
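A simplified FullGrad-style sketch in PyTorch follows; it keeps the input-gradient term plus per-layer bias-gradient terms for convolutional layers, omitting the batch-norm biases the full method also aggregates. The layer list and helper names are illustrative assumptions.

import torch
import torch.nn.functional as F

def postprocess(g, size):
    # abs, upsample to input resolution, rescale to [0, 1]
    g = F.interpolate(g.abs(), size=size, mode='bilinear', align_corners=False)
    g = g - g.min()
    return g / (g.max() + 1e-8)

def fullgrad_saliency(model, image, target_class, conv_layers):
    # conv_layers: convolutional modules with a (non-None) bias term
    image = image.clone().requires_grad_(True)
    feats = []
    hooks = [m.register_forward_hook(lambda mod, inp, out: feats.append(out))
             for m in conv_layers]
    score = model(image)[0, target_class]
    grads = torch.autograd.grad(score, [image] + feats)
    for h in hooks:
        h.remove()
    size = image.shape[-2:]
    # input-sensitivity term: input-gradient x input
    saliency = postprocess((grads[0] * image).sum(1, keepdim=True), size)
    # per-neuron sensitivity terms: bias x gradient, summed over channels
    for layer, g in zip(conv_layers, grads[1:]):
        bias = layer.bias.view(1, -1, 1, 1)
        saliency = saliency + postprocess((g * bias).sum(1, keepdim=True), size)
    return saliency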

A Perception-Inspired Deep Learning Framework for Predicting Perceptual Texture Similarity

This work proposes a novel framework for predicting the perceptual similarity between two texture images, one that considers both powerful features and perceptual characteristics of contours extracted from the images using convolutional neural networks.

Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks

This paper proposes a generalized method called Grad-CAM++ that can provide better visual explanations of CNN model predictions than the state of the art, both in terms of object localization and in explaining occurrences of multiple object instances in a single image.
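A hedged Grad-CAM++ sketch in PyTorch follows, using the common closed form in which the class score is assumed to be exponentiated, so the higher-order derivatives reduce to powers of the first gradient; the inputs are assumed to be the activations of the last convolutional layer and the gradients of the class score with respect to them.

import torch
import torch.nn.functional as F

def grad_cam_pp(activations, gradients):
    # activations, gradients: (1, K, H, W) maps and d(score)/d(maps)
    g2, g3 = gradients ** 2, gradients ** 3
    sum_a = activations.sum(dim=(2, 3), keepdim=True)   # (1, K, 1, 1)
    alpha = g2 / (2 * g2 + sum_a * g3 + 1e-8)           # pixel-wise weights
    w = (alpha * F.relu(gradients)).sum(dim=(2, 3))     # (1, K) channel weights
    cam = F.relu((w[..., None, None] * activations).sum(dim=1))
    return cam / (cam.max() + 1e-8)                     # (1, H, W) in [0, 1]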

Efficient Saliency Maps for Explainable AI

We describe an explainable AI saliency map method for use with deep convolutional neural networks (CNN) that is much more efficient than popular fine-resolution gradient methods.

Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization

This work proposes a technique for producing ‘visual explanations’ for decisions from a large class of Convolutional Neural Network (CNN)-based models, making them more transparent and explainable, and shows that even non-attention-based models learn to localize discriminative regions of the input image.
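A minimal Grad-CAM sketch in PyTorch follows; the split of the network into a feature extractor and a classification head is an assumption about the architecture, not part of the method itself.

import torch

def grad_cam(features_fn, head_fn, image, target_class):
    # features_fn: image -> (1, K, H, W) last-conv feature maps
    # head_fn: feature maps -> (1, num_classes) logits
    feats = features_fn(image)
    feats.retain_grad()
    head_fn(feats)[0, target_class].backward()
    weights = feats.grad.mean(dim=(2, 3))          # alpha_k: GAP of gradients
    cam = torch.relu((weights[..., None, None] * feats).sum(dim=1))
    return cam / (cam.max() + 1e-8)                # coarse (H, W) heatmap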

SmoothGrad: removing noise by adding noise

This work introduces SmoothGrad, a simple method that can help visually sharpen gradient-based sensitivity maps, and discusses lessons in the visualization of these maps.
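A minimal SmoothGrad sketch in PyTorch follows; the sample count and noise level are illustrative choices, not prescribed values.

import torch

def smooth_grad(model, image, target_class, n=25, sigma=0.15):
    # sigma is the noise scale as a fraction of the input's value range
    span = image.max() - image.min()
    total = torch.zeros_like(image)
    for _ in range(n):
        noisy = image + torch.randn_like(image) * sigma * span
        noisy.requires_grad_(True)
        model(noisy)[0, target_class].backward()
        total += noisy.grad                      # accumulate per-copy gradients
    return (total / n).abs().max(dim=1)[0]       # averaged, sharpened map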

Striving for Simplicity: The All Convolutional Net

It is found that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks.
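The substitution in a nutshell, sketched in PyTorch with arbitrary channel counts: both blocks halve the spatial resolution, but the second learns its downsampling weights instead of taking a fixed local maximum.

import torch.nn as nn

pooled  = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                        nn.MaxPool2d(kernel_size=2, stride=2))
allconv = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU())
# Both map (N, 64, H, W) -> (N, 64, H/2, W/2).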

Deep Residual Learning for Image Recognition

This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
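A minimal residual block sketch in PyTorch: the convolutions learn a residual F(x) that is added back to the identity shortcut, which is what eases optimization at depth.

import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # add the identity shortcut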

Describing Textures in the Wild

This work identifies a vocabulary of forty-seven texture terms and uses them to describe a large dataset of patterns collected "in the wild", and shows that the resulting texture attributes outperform specialized texture descriptors, not only on this problem but also on established material recognition datasets.