Corpus ID: 235358676

DISCO: accurate Discrete Scale Convolutions

@article{Sosnovik2021DISCOAD,
  title={DISCO: accurate Discrete Scale Convolutions},
  author={Ivan Sosnovik and Artem Moskalev and Arnold W. M. Smeulders},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.02733}
}
Scale is often treated as a given, disturbing factor in many vision tasks. Treated this way, it is one of the reasons why more data are needed during learning. In recent work, scale equivariance was added to convolutional neural networks and shown to be effective for a range of tasks. We aim for accurate scale-equivariant convolutional neural networks (SE-CNNs) applicable to problems where high granularity of scale and small filter sizes are required. Current SE-CNNs rely on weight sharing and filter… 
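As a rough illustration of the weight sharing and filter rescaling the abstract refers to, the sketch below lifts an image to a scale dimension by convolving it with integer-rescaled copies of a single shared filter. The class name, the scale set, and the use of dilation as the rescaling are assumptions made for illustration, not the paper's exact discrete construction.

```python
# Minimal sketch (not the paper's exact method): a "lifting" scale convolution
# that shares one filter across several integer rescalings. Integer rescaling
# is approximated by dilation, which keeps the filter on the pixel grid.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleLiftingConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        # One shared weight tensor; rescaled copies are derived from it.
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_channels, kernel_size, kernel_size) * 0.1
        )

    def forward(self, x):
        # x: [B, C, H, W] -> output: [B, out_channels, num_scales, H, W]
        outs = []
        for s in self.scales:
            pad = s * (self.weight.shape[-1] // 2)  # keep spatial size for odd kernels
            outs.append(F.conv2d(x, self.weight, padding=pad, dilation=s))
        return torch.stack(outs, dim=2)

# Usage: lift an image batch into a scale dimension.
x = torch.randn(2, 3, 32, 32)
layer = ScaleLiftingConv2d(3, 8)
print(layer(x).shape)  # torch.Size([2, 8, 3, 32, 32])
```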

Citations

Scale-Equivariant Unrolled Neural Networks for Data-Efficient Accelerated MRI Reconstruction
TLDR
This work proposes modeling the proximal operators of unrolled neural networks with scale-equivariant convolutional neural networks in order to improve data-efficiency and robustness to drifts in the scale of the images, which might stem from the variability of patient anatomies or from changes across different MRI scanners.
Wiggling Weights to Improve the Robustness of Classifiers
TLDR
It is concluded that wiggled transform-augmented networks acquire good robustness even to perturbations not seen during training, and even improve the classification of unperturbed, clean images substantially.
Scale-invariant scale-channel networks: Deep networks that generalise to previously unseen scales
TLDR
A formalism for analysing the covariance and invariance properties of scale-channel networks, including their relations to scale-space theory, is developed, and a new type of foveated scale-channel architecture is proposed, in which the scale channels process increasingly larger parts of the image with decreasing resolution.
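As a rough sketch of such a foveated front end (the crop fractions, output size, and function name are illustrative assumptions), the snippet below crops increasingly large centred regions and resizes each to one fixed resolution, so that a single shared network could process every scale channel.

```python
# Minimal sketch of foveated scale channels: larger centred crops at lower
# effective resolution, all resized to the same output size.
import torch
import torch.nn.functional as F

def foveated_scale_channels(x, crop_fractions=(0.25, 0.5, 1.0), out_size=32):
    # x: [B, C, H, W] -> list of [B, C, out_size, out_size], one per channel.
    _, _, h, w = x.shape
    channels = []
    for f in crop_fractions:
        ch, cw = max(1, int(h * f)), max(1, int(w * f))
        top, left = (h - ch) // 2, (w - cw) // 2
        crop = x[:, :, top:top + ch, left:left + cw]
        channels.append(F.interpolate(crop, size=(out_size, out_size),
                                      mode="bilinear", align_corners=False))
    return channels

x = torch.randn(1, 3, 128, 128)
print([tuple(c.shape) for c in foveated_scale_channels(x)])
```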
Exploiting Redundancy: Separable Group Convolutional Networks on Lie Groups
TLDR
This work investigates the properties of representations learned by regular G-CNNs and shows considerable parameter redundancy in group convolution kernels, which motivates further weight-tying by sharing convolution kernels over subgroups, and provides a continuous parameterisation of separable convolution kernels.

References

SHOWING 1-10 OF 62 REFERENCES
Scale Equivariant CNNs with Scale Steerable Filters
TLDR
A scale-equivariant network is built with the use of scale-steerable filters and improves performance by about 2% over other comparable methods of scale equivariance and scale invariance when run on the FMNIST-scale dataset.
Scale equivariance in CNNs with vector fields
TLDR
This work studies the effect of injecting local scale equivariance into convolutional neural networks and shows that this improves the performance of the model by over 20% on the scale-equivariant task of regressing the scaling factor applied to randomly scaled MNIST digits.
Scale-Equivariant Steerable Networks
TLDR
This work pays attention to scale changes, which regularly appear in various tasks due to the changing distances between the objects and the camera, and introduces the general theory for building scale-equivariant convolutional networks with steerable filters.
Scale-Invariant Convolutional Neural Networks
TLDR
A scale-invariant convolutional neural network (SiCNN) is presented, a model designed to incorporate multi-scale feature extraction and classification into the network structure; results show that SiCNN detects features at various scales and that the classification results exhibit strong robustness against object scale variations.
Locally Scale-Invariant Convolutional Neural Networks
TLDR
A simple model is presented that allows ConvNets to learn features in a locally scale-invariant manner without increasing the number of model parameters, and it is shown on a modified MNIST dataset that, when faced with scale variation, building in scale-invariance allows ConvNets to learn more discriminative features with reduced chances of over-fitting.
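A minimal sketch of this idea, under assumed choices of bilinear rescaling and max-pooling over scales (the function name and scale set are illustrative): apply one shared filter to several rescaled copies of the input, resize the responses back, and keep the per-pixel maximum.

```python
# Minimal sketch of local scale invariance by pooling over rescaled inputs.
import torch
import torch.nn.functional as F

def locally_scale_invariant_conv(x, weight, scales=(0.5, 1.0, 2.0)):
    # x: [B, C, H, W], weight: [C_out, C, k, k] (shared across all scales)
    h, w = x.shape[-2:]
    responses = []
    for s in scales:
        xs = F.interpolate(x, scale_factor=s, mode="bilinear", align_corners=False)
        r = F.conv2d(xs, weight, padding=weight.shape[-1] // 2)
        responses.append(F.interpolate(r, size=(h, w), mode="bilinear",
                                       align_corners=False))
    return torch.stack(responses, dim=0).max(dim=0).values  # max over scales

x = torch.randn(1, 3, 32, 32)
w = torch.randn(4, 3, 3, 3) * 0.1
print(locally_scale_invariant_conv(x, w).shape)  # torch.Size([1, 4, 32, 32])
```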
Scale-Equivariant Neural Networks with Decomposed Convolutional Filters
TLDR
Numerical experiments demonstrate that the proposed scale-equivariant neural network with decomposed convolutional filters (ScDCFNet) achieves significantly improved performance in multiscale image classification and better interpretability than regular CNNs at a reduced model size.
Warped Convolutions: Efficient Invariance to Spatial Transformations
TLDR
This work presents a construction that is simple and exact, yet has the same computational complexity that standard convolutions enjoy: a constant image warp followed by a simple convolution, both of which are standard blocks in deep learning toolboxes.
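A minimal sketch of the warp-then-convolve idea, assuming a log-polar warp built with grid_sample (the grid construction and its ranges are illustrative, not the paper's exact warp): one fixed image warp followed by an ordinary convolution, so certain spatial transformations of the input become shifts in the warped domain.

```python
# Minimal sketch: a constant (log-polar) image warp followed by a plain conv.
import torch
import torch.nn.functional as F

def log_polar_grid(h, w):
    # Sampling positions in normalised [-1, 1] coordinates for grid_sample.
    theta = torch.linspace(-torch.pi, torch.pi, w)
    log_r = torch.linspace(-3.0, 0.0, h)          # log of radius in (0, 1]
    r = torch.exp(log_r)
    yy = r[:, None] * torch.sin(theta)[None, :]
    xx = r[:, None] * torch.cos(theta)[None, :]
    return torch.stack([xx, yy], dim=-1)          # [h, w, 2]

def warped_conv(x, weight):
    # x: [B, C, H, W]; warp once, then convolve as usual.
    grid = log_polar_grid(*x.shape[-2:]).unsqueeze(0).expand(x.shape[0], -1, -1, -1)
    warped = F.grid_sample(x, grid, mode="bilinear", align_corners=False)
    return F.conv2d(warped, weight, padding=weight.shape[-1] // 2)

x = torch.randn(1, 3, 64, 64)
w = torch.randn(8, 3, 3, 3) * 0.1
print(warped_conv(x, w).shape)  # torch.Size([1, 8, 64, 64])
```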
Harmonic Networks: Deep Translation and Rotation Equivariance
TLDR
H-Nets are presented, a CNN exhibiting equivariance to patch-wise translation and 360° rotation, and it is demonstrated that their layers are general enough to be used in conjunction with the latest architectures and techniques, such as deep supervision and batch normalization.
Deep Pyramidal Residual Networks
TLDR
This research gradually increases the feature map dimension at all units to involve as many locations as possible in the network architecture and proposes a novel residual unit capable of further improving the classification accuracy with the new network architecture.
Deep Scale-spaces: Equivariance Over Scale
TLDR
Deep scale-spaces (DSS) are introduced, a generalization of convolutional neural networks exploiting the scale symmetry structure of conventional image recognition tasks, and scale-equivariant cross-correlations based on a principled extension of convolutions are constructed.