Corpus ID: 238744303

When saliency goes off on a tangent: Interpreting Deep Neural Networks with nonlinear saliency maps

@article{Rosenzweig2021WhenSG,
  title={When saliency goes off on a tangent: Interpreting Deep Neural Networks with nonlinear saliency maps},
  author={Jan Rosenzweig and Zoran Cvetkovi{\'c} and Ivana Rosenzweig},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.06639}
}
A fundamental bottleneck in utilising complex machine learning systems for critical applications has been not knowing why they do what they do, which has prevented the development of crucial safety protocols. To date, no method exists that can provide full insight into the granularity of a neural network's decision process. In the past, saliency maps were an early attempt to resolve this problem through sensitivity calculations, whereby dimensions of a data point are selected based on how…
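
For orientation: the "sensitivity calculations" behind classical saliency maps are typically implemented as the gradient of a class score with respect to the input, so each input dimension is scored by how strongly a small change to it would move the prediction. Below is a minimal PyTorch sketch of such a gradient saliency map; the network and input are placeholders, not the paper's setup.

import torch
import torchvision.models as models

# Minimal sketch of a gradient (sensitivity) saliency map: the saliency
# of each input dimension is the magnitude of the partial derivative of
# the class score with respect to that dimension.
model = models.resnet18(weights=None)  # placeholder network
model.eval()

x = torch.randn(1, 3, 224, 224, requires_grad=True)  # placeholder input
score = model(x)[0].max()  # score of the top-scoring class
score.backward()           # computes d(score)/d(x) via autograd

# Collapse the colour channels; large values mark the pixels the
# prediction is most sensitive to.
saliency = x.grad.abs().max(dim=1)[0]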

References

Saliency Prediction in the Deep Learning Era: Successes and Limitations
  • A. Borji
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
  • 2021
TLDR: A large number of image and video saliency models are reviewed and compared over two image benchmarks and two large-scale video datasets, and factors that contribute to the gap between models and humans are identified.
Deep learning saliency maps do not accurately highlight diagnostically relevant regions for medical image interpretation
TLDR: It is demonstrated that the most commonly used saliency map generating method, Grad-CAM, results in low performance for 10 pathologies on chest X-rays, and that several important limitations of interpretability techniques for medical imaging must be addressed before use in clinical workflows.
Assessing the (Un)Trustworthiness of Saliency Maps for Localizing Abnormalities in Medical Imaging
TLDR: It is suggested that saliency map usage in the high-risk domain of medical imaging warrants additional scrutiny, and it is recommended that detection or segmentation models be used if localization is the desired output of the network.
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
TLDR: This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets), and establishes the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks.
Rethinking the Inception Architecture for Computer Vision
TLDR: This work explores ways to scale up networks that aim to utilize the added computation as efficiently as possible, through suitably factorized convolutions and aggressive regularization.
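
As an illustration of the factorization idea named in that summary (a sketch, not code from the paper): an n x n convolution can be replaced by a 1 x n convolution followed by an n x 1 convolution, keeping the same receptive field with fewer parameters.

import torch.nn as nn

# Sketch: factorizing an n x n convolution into asymmetric 1 x n and
# n x 1 convolutions, in the style of Inception-v3 factorization.
def factorized_conv(in_ch, out_ch, n=7):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=(1, n), padding=(0, n // 2)),
        nn.Conv2d(out_ch, out_ch, kernel_size=(n, 1), padding=(n // 2, 0)),
    )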
Deep Residual Learning for Image Recognition
TLDR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
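
For illustration, a residual block adds its input back to the output of a small stack of layers, so the block only has to learn a correction F(x) rather than the full mapping. A minimal sketch follows; the channel counts and layer choices are assumptions, not the paper's exact block.

import torch.nn as nn

# Sketch of a residual block: y = F(x) + x, where the identity
# shortcut lets gradients flow past the convolutional stack.
class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.f(x) + x)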
Very Deep Convolutional Networks for Large-Scale Image Recognition
TLDR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Xception: Deep Learning with Depthwise Separable Convolutions
  • François Chollet
  • 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2017
TLDR: This work proposes a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions, and shows that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset, and significantly outperforms it on a larger image classification dataset.
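
For illustration (a sketch, not the paper's implementation), a depthwise separable convolution factorizes a standard convolution into a per-channel spatial convolution followed by a 1x1 pointwise convolution that mixes channels.

import torch.nn as nn

# Sketch of a depthwise separable convolution: a spatial convolution
# applied independently to each channel (groups=in_ch), followed by a
# 1x1 pointwise convolution that recombines the channels.
class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))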
Research shows AI is often biased. Here's how to make algorithms work for all of us
  • World Economic Forum
  • 2021
Amazon scraps secret AI recruiting tool that showed bias against women
  • Reuters
  • 2018