Visualizing image content to explain novel image discovery

@article{Lee2020VisualizingIC,
  title={Visualizing image content to explain novel image discovery},
  author={Jake H. Lee and Kiri L. Wagstaff},
  journal={Data Mining and Knowledge Discovery},
  year={2020},
  pages={1--28}
}
The initial analysis of any large data set can be divided into two phases: (1) the identification of common trends or patterns and (2) the identification of anomalies or outliers that deviate from those trends. We focus on the goal of detecting observations with novel content, which can alert us to artifacts in the data set or, potentially, the discovery of previously unknown phenomena. To aid in interpreting and diagnosing the novel aspect of these selected observations, we recommend the use…
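The two-phase idea in the abstract (model common trends, then flag observations the model explains poorly) can be sketched with a simple stand-in pipeline. This is a minimal illustration, not the authors' actual method: random vectors stand in for CNN image features, and PCA reconstruction error stands in for the novelty score.

```python
# Sketch of the two-phase analysis from the abstract: score each
# observation by how poorly a low-rank model of the "common trends"
# reconstructs it. Feature vectors are random placeholders standing in
# for CNN image features; all names here are illustrative.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# 200 "typical" observations plus 5 injected anomalies with shifted statistics.
typical = rng.normal(0.0, 1.0, size=(200, 64))
novel = rng.normal(4.0, 1.0, size=(5, 64))
features = np.vstack([typical, novel])

# Phase 1: capture common trends with a low-dimensional PCA subspace.
pca = PCA(n_components=8).fit(typical)

# Phase 2: flag outliers by reconstruction error under that model.
recon = pca.inverse_transform(pca.transform(features))
errors = np.linalg.norm(features - recon, axis=1)

# The highest-error indices are the candidate novel observations.
most_novel = np.argsort(errors)[-5:]
print(sorted(most_novel.tolist()))
```

Any novelty detector over the feature space (isolation forests, k-NN distance, autoencoder reconstruction) could replace the PCA step; the paper's contribution concerns explaining the selections visually, which this sketch does not attempt.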
1 Citation
What Does CNN Shift Invariance Look Like? A Visualization Study
It is concluded that features extracted from popular networks are not globally invariant and that biases and artifacts exist within this variance; anti-aliased models significantly improve local invariance but do not affect global invariance.

References

Showing 1–10 of 70 references
Interpretable Discovery in Large Image Data Sets
This work describes a new strategy that combines novelty detection with CNN image features to achieve rapid discovery with interpretable explanations of novel image content.
Visualizing and Understanding Convolutional Networks
A novel visualization technique is introduced that gives insight into the function of intermediate feature layers and the operation of the classifier in large Convolutional Network models; it is used in a diagnostic role to find model architectures that outperform Krizhevsky et al. on the ImageNet classification benchmark.
Network Dissection: Quantifying Interpretability of Deep Visual Representations
This work uses the proposed Network Dissection method to test the hypothesis that interpretability is an axis-independent property of the representation space, then applies the method to compare the latent representations of various networks when trained to solve different classification problems.
Understanding deep image representations by inverting them
Image representations, from SIFT and Bag of Visual Words to Convolutional Neural Networks (CNNs), are a crucial component of almost any image understanding system. Nevertheless, our understanding of…
Understanding Neural Networks Through Deep Visualization
This work introduces several new regularization methods that combine to produce qualitatively clearer, more interpretable visualizations of convolutional neural networks.
Learning and Transferring Mid-level Image Representations Using Convolutional Neural Networks
This work designs a method to reuse layers trained on the ImageNet dataset to compute mid-level image representations for images in the PASCAL VOC dataset, and shows that despite differences in image statistics and tasks in the two datasets, the transferred representation leads to significantly improved results for object and action classification.
DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition
DeCAF, an open-source implementation of deep convolutional activation features, along with all associated network parameters, is released to enable vision researchers to conduct experiments with deep representations across a range of visual concept learning paradigms.
Inverting Visual Representations with Convolutional Networks
  • A. Dosovitskiy, T. Brox
  • Computer Science
  • 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2016
This work proposes a new approach to study image representations by inverting them with an up-convolutional neural network, and applies this method to shallow representations (HOG, SIFT, LBP) as well as to deep networks.
CNN Features Off-the-Shelf: An Astounding Baseline for Recognition
A series of experiments conducted for different recognition tasks using the publicly available code and model of the OverFeat network, which was trained to perform object classification on ILSVRC13, suggests that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks.
Deep multi-scale video prediction beyond mean square error
This work trains a convolutional network to generate future frames given an input sequence and proposes three different and complementary feature learning strategies: a multi-scale architecture, an adversarial training method, and an image gradient difference loss function.