A Learnable Scatternet: Locally Invariant Convolutional Layers

@article{Cotter2019ALS,
  title={A Learnable Scatternet: Locally Invariant Convolutional Layers},
  author={Fergal Cotter and Nick G. Kingsbury},
  journal={2019 IEEE International Conference on Image Processing (ICIP)},
  year={2019},
  pages={350-354}
}
  • Fergal Cotter, N. Kingsbury
  • Published 7 March 2019
  • Computer Science
  • 2019 IEEE International Conference on Image Processing (ICIP)
In this paper, we explore tying together the ideas from Scattering Transforms and Convolutional Neural Networks (CNN) for Image Analysis by proposing a learnable ScatterNet. Previous attempts at tying them together in hybrid networks have tended to keep the two parts separate, with the ScatterNet forming a fixed front end and a CNN forming a learned backend. We instead look at adding learning between scattering orders, as well as adding learned layers before the ScatterNet. We do this by… 
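
The sketch below illustrates the core idea in PyTorch: a fixed complex filter bank followed by a modulus, with a learned 1x1 convolution mixing channels between scattering orders and a learned convolution placed before the first order. This is only a hedged illustration, not the authors' implementation; the random filter bank stands in for the wavelet filters of a real ScatterNet, and all names and sizes here (LearnableScatteringOrder, n_orientations, kernel_size) are assumptions.

```python
# Hedged sketch of a "learnable scattering" style layer, NOT the authors' code.
# Fixed complex filters -> modulus -> learned 1x1 mixing between orders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableScatteringOrder(nn.Module):
    def __init__(self, in_channels, n_orientations=6, kernel_size=7):
        super().__init__()
        # Fixed (non-learned) filters: a placeholder for the complex wavelets
        # of a true ScatterNet, one real/imaginary pair per orientation.
        bank = torch.randn(in_channels * n_orientations, 1, kernel_size, kernel_size)
        self.register_buffer("filt_real", bank)
        self.register_buffer("filt_imag", torch.randn_like(bank))
        self.in_channels = in_channels
        # Learned 1x1 convolution that mixes modulus channels before the next
        # order, i.e. "learning between scattering orders".
        self.mix = nn.Conv2d(in_channels * n_orientations,
                             in_channels * n_orientations, kernel_size=1)

    def forward(self, x):
        pad = self.filt_real.shape[-1] // 2
        # Depthwise complex filtering: each input channel sees every orientation.
        real = F.conv2d(x, self.filt_real, padding=pad, groups=self.in_channels)
        imag = F.conv2d(x, self.filt_imag, padding=pad, groups=self.in_channels)
        # Complex modulus gives the locally invariant, non-negative envelope.
        mag = torch.sqrt(real ** 2 + imag ** 2 + 1e-8)
        # Learned channel mixing, then smoothing/downsampling.
        return F.avg_pool2d(self.mix(mag), 2)

# Usage: two orders stacked after a learned layer placed before the ScatterNet.
front = nn.Conv2d(3, 3, 3, padding=1)
order1 = LearnableScatteringOrder(3)
order2 = LearnableScatteringOrder(3 * 6)
y = order2(order1(front(torch.randn(1, 3, 32, 32))))
print(y.shape)  # torch.Size([1, 108, 8, 8])
```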

Citations

Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey

This survey gives a concise overview of different approaches to incorporating geometrical prior knowledge into DNNs and connects those methods to the field of 3D object detection for autonomous driving, where they are expected to yield promising results.

From CNNs to Shift-Invariant Twin Wavelet Models

This paper designs a “twin” architecture based on the dual-tree complex wavelet packet transform, which generates similar outputs to standard CNNs with fewer trainable parameters, and outperforms recent antialiasing methods based on low-pass filtering by preserving high-frequency information while reducing memory usage.

Scattering-Based Hybrid Networks: An Evaluation and Design Guide

  • D. Minskiy, M. Bober
  • Computer Science
    2021 IEEE International Conference on Image Processing (ICIP)
  • 2021
This work presents and benchmarks a collection of 27 networks, including new learnable extensions to existing designs, within a framework that allows assessment of a wide range of scattering types and their effects on system performance.

Phase Collapse in Neural Networks

It is demonstrated that a different mechanism, phase collapse, explains the ability to progressively eliminate spatial variability while improving linear class separation in deep convolutional image classifiers.

Learnable filter-banks for CNN-based audio applications

This work investigates the design of a convolutional layer where kernels are parameterized functions, which reduces the number of weights to be trained and enables larger kernel sizes, an advantage for audio applications.

On the Shift Invariance of Max Pooling Feature Maps in Convolutional Neural Networks

It is proved that, under specific conditions, feature maps computed by the subsequent max pooling operator tend to approximate the modulus of complex Gabor-like coefficients, and as such, are stable with respect to certain input shifts.

Efficient Hybrid Network: Inducting Scattering Features

  • D. Minskiy, M. Bober
  • Computer Science
    2022 26th International Conference on Pattern Recognition (ICPR)
  • 2022
An E-HybridNet is introduced: the first scattering-based approach that consistently outperforms its conventional counterparts on a diverse range of datasets while inheriting the key property of prior hybrid networks, effective generalisation in data-limited scenarios.

Invariant Integration in Deep Convolutional Feature Space

This work applies the proposed layer to explicitly insert invariance properties for vision-related classification tasks, demonstrates the approach for the case of rotation invariance and reports state-of-the-art performance on the Rotated-MNIST dataset.

Parametric Scattering Networks

Focusing on Morlet wavelets, it is proposed to learn the scales, orientations, and aspect ratios of the filters to produce problem-specific parameterizations of the scattering transform, and it is shown that these learned versions yield significant performance gains over the standard scattering transform in small-sample classification settings.
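
As a rough illustration of what learning the filter parameters can mean, the snippet below builds a Morlet-like filter whose scale, orientation, and aspect ratio are ordinary torch.nn.Parameter objects, so gradients flow into them. The function name, the fixed carrier frequency xi, and the zero-mean correction are assumptions for this sketch, not the paper's exact parameterization.

```python
# Hedged sketch: a Morlet-like filter with learnable scale, orientation, aspect ratio.
import torch

def morlet_filter(size, sigma, theta, gamma, xi=2.0):
    half = size // 2
    coords = torch.arange(-half, half + 1, dtype=torch.float32)
    y, x = torch.meshgrid(coords, coords, indexing="ij")
    # Rotate coordinates by theta; gamma controls the anisotropy (aspect ratio).
    xr = x * torch.cos(theta) + y * torch.sin(theta)
    yr = -x * torch.sin(theta) + y * torch.cos(theta)
    envelope = torch.exp(-(xr ** 2 + (gamma * yr) ** 2) / (2 * sigma ** 2))
    carrier = torch.exp(1j * xi * xr / sigma)   # complex exponential along xr
    psi = envelope * carrier
    beta = psi.sum() / envelope.sum()           # zero-mean (admissibility) correction
    return psi - beta * envelope

# Scale, orientation and aspect ratio are parameters, so they receive gradients.
sigma = torch.nn.Parameter(torch.tensor(2.0))
theta = torch.nn.Parameter(torch.tensor(0.5))
gamma = torch.nn.Parameter(torch.tensor(1.0))
psi = morlet_filter(15, sigma, theta, gamma)
print(psi.shape, psi.sum())  # 15x15 complex filter with approximately zero mean
```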

References

Showing 1-10 of 29 references

Scaling the Scattering Transform: Deep Hybrid Networks

We use the scattering network as a generic and fixed initialization of the first layers of a supervised hybrid deep network. We show that early layers do not necessarily need to be learned, providing…

Visualizing and improving scattering networks

  • Fergal Cotter, N. Kingsbury
  • Computer Science
    2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP)
  • 2017
It is shown that the higher orders of ScatterNets are sensitive to complex, edge-like patterns (checker-boards and rippled edges) that are quite dissimilar from the patterns visualized in second and third layers of Convolutional Neural Networks (CNNs) — the current state of the art Image Classifiers.

A hybrid network: Scattering and Convnet

Densely Connected Convolutional Networks

The Dense Convolutional Network (DenseNet) connects each layer to every other layer in a feed-forward fashion and has several compelling advantages: it alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.

Striving for Simplicity: The All Convolutional Net

It is found that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks.
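
A minimal sketch of that replacement (illustrative shapes only, not the paper's architecture): a 2x2 max-pool and a stride-2 convolution both halve the spatial resolution, but the convolution learns its own downsampling.

```python
# Replacing max pooling with a strided convolution: same output shape, learned weights.
import torch
import torch.nn as nn

x = torch.randn(1, 64, 32, 32)

# Standard downsampling with max pooling (no learned parameters).
pooled = nn.MaxPool2d(kernel_size=2, stride=2)(x)                    # -> (1, 64, 16, 16)

# The all-convolutional alternative: a stride-2 convolution does the downsampling.
strided = nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1)(x)   # -> (1, 64, 16, 16)

print(pooled.shape, strided.shape)
```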

Dynamic Steerable Blocks in Deep Residual Networks

This work investigates the generalized notion of frames designed with image properties in mind, as alternatives to this parametrization, and shows that frame-based ResNets and DenseNets can consistently improve performance on CIFAR-10+, while having additional pleasant properties like steerability.

DCFNet: Deep Neural Network with Decomposed Convolutional Filters

This paper proposes to decompose convolutional filters in CNN as a truncated expansion with pre-fixed bases, namely the Decomposed Convolutional Filters network (DCFNet), where the expansion coefficients remain learned from data.

ScatterNet hybrid frameworks for deep learning

This dissertation proposes the ScatterNet Hybrid Framework for Deep Learning that is inspired by the circuitry of the visual cortex and uses a handcrafted front-end, an unsupervised learning based middle-section, and a supervised back-end to rapidly learn hierarchical features from unlabelled data.

Tiny ImageNet Visual Recognition Challenge

This work trains a relatively deep network with a large number of filters per convolutional layer to achieve high accuracy on the test dataset, and additionally trains several slightly shallower classifiers with fewer parameters in order to build a dataset allowing a thorough study of ensemble techniques.

Very Deep Convolutional Networks for Large-Scale Image Recognition

This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.