DISCO: accurate Discrete Scale Convolutions
@article{Sosnovik2021DISCOAD,
  title   = {DISCO: accurate Discrete Scale Convolutions},
  author  = {Ivan Sosnovik and Artem Moskalev and Arnold W. M. Smeulders},
  journal = {ArXiv},
  year    = {2021},
  volume  = {abs/2106.02733}
}
Scale is often treated as a given, disturbing factor in many vision tasks. Treated this way, it is one of the reasons why more data are needed during learning. In recent work, scale equivariance was added to convolutional neural networks and shown to be effective for a range of tasks. We aim for accurate scale-equivariant convolutional neural networks (SE-CNNs) applicable to problems where high granularity of scale and small filter sizes are required. Current SE-CNNs rely on weight sharing and filter…
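The weight sharing across scales that the abstract refers to can be illustrated with a minimal sketch. The following is a generic scale-convolution layer in PyTorch, not the DISCO construction itself: the scale set, filter size, and bilinear rescaling of the canonical filter are all illustrative assumptions.

```python
# A minimal sketch of a scale-equivariant convolution via weight sharing:
# one canonical filter is rescaled and applied at several discrete scales.
import torch
import torch.nn.functional as F

class ScaleConv2d(torch.nn.Module):
    def __init__(self, in_ch, out_ch, size=5, scales=(1.0, 1.4, 2.0)):
        super().__init__()
        self.scales = scales
        # One canonical filter, shared (via rescaling) across all scales.
        self.weight = torch.nn.Parameter(torch.randn(out_ch, in_ch, size, size) * 0.1)

    def forward(self, x):
        outputs = []
        for s in self.scales:
            k = max(3, int(round(self.weight.shape[-1] * s)) | 1)  # odd size
            w = F.interpolate(self.weight, size=(k, k), mode="bilinear",
                              align_corners=False)
            outputs.append(F.conv2d(x, w, padding=k // 2))
        # Stack responses into an explicit scale axis: (B, S, C, H, W).
        return torch.stack(outputs, dim=1)

x = torch.randn(1, 3, 32, 32)
y = ScaleConv2d(3, 8)(x)
print(y.shape)  # torch.Size([1, 3, 8, 32, 32])
```

The bilinear rescaling here is exactly the kind of filter interpolation whose inaccuracy at small filter sizes motivates the discrete scale convolutions of the paper.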
4 Citations
Scale-Equivariant Unrolled Neural Networks for Data-Efficient Accelerated MRI Reconstruction
- Computer Science · ArXiv
- 2022
This work proposes modeling the proximal operators of unrolled neural networks with scale-equivariant convolutional neural networks in order to improve data-efficiency and robustness to drifts in the scale of the images that might stem from the variability of patient anatomies or from changes across different MRI scanners.
Wiggling Weights to Improve the Robustness of Classifiers
- Computer Science · ArXiv
- 2021
It is concluded that wiggled transform-augmented networks acquire good robustness even for perturbations not seen during training, and even substantially improve the classification of unperturbed, clean images.
Scale-invariant scale-channel networks: Deep networks that generalise to previously unseen scales
- Computer Science · Journal of Mathematical Imaging and Vision
- 2022
A formalism for analysing the covariance and invariance properties of scale-channel networks, including their relations to scale-space theory, is developed, and a new type of foveated scale-channel architecture is proposed, where the scale channels process increasingly larger parts of the image with decreasing resolution.
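The scale-channel idea can be sketched compactly: apply one shared network to several rescaled copies of the input and pool over the resulting scale channels. The backbone and scale set below are placeholder choices, and the foveated variant described in the paper additionally crops around the image center, which this sketch omits.

```python
# A minimal sketch of a scale-channel architecture, assuming PyTorch.
import torch
import torch.nn.functional as F

backbone = torch.nn.Conv2d(1, 16, 3, padding=1)  # stand-in for a shared CNN

def scale_channels(x, scales=(0.5, 1.0, 2.0)):
    responses = []
    for s in scales:
        xs = F.interpolate(x, scale_factor=s, mode="bilinear", align_corners=False)
        r = backbone(xs)                       # same weights in every channel
        r = F.adaptive_max_pool2d(r, 1)        # global pool per scale channel
        responses.append(r.flatten(1))
    # Max over the scale axis gives (approximate) scale invariance.
    return torch.stack(responses, dim=0).max(dim=0).values

print(scale_channels(torch.randn(2, 1, 28, 28)).shape)  # torch.Size([2, 16])
```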
Exploiting Redundancy: Separable Group Convolutional Networks on Lie Groups
- Computer Science, Mathematics · ArXiv
- 2021
This work investigates the properties of representations learned by regular G-CNNs and shows considerable parameter redundancy in group convolution kernels, which motivates further weight-tying by sharing convolution kernels over subgroups, and provides a continuous parameterisation of separable convolution kernels.
References
Showing 1-10 of 62 references
Scale Equivariant CNNs with Scale Steerable Filters
- Computer Science · 2020 International Conference on Machine Vision and Image Processing (MVIP)
- 2020
A scale-equivariant network is built using scale-steerable filters and improves performance by about 2% over other comparable scale-equivariant and scale-invariant methods when run on the FMNIST-scale dataset.
Scale equivariance in CNNs with vector fields
- Computer Science · ArXiv
- 2018
This work studies the effect of injecting local scale equivariance into Convolutional Neural Networks and shows that this improves the performance of the model by over 20% in the scale-equivariant task of regressing the scaling factor applied to randomly scaled MNIST digits.
Scale-Equivariant Steerable Networks
- Computer Science · ICLR
- 2020
This work pays attention to scale changes, which regularly appear in various tasks due to the changing distances between the objects and the camera, and introduces a general theory for building scale-equivariant convolutional networks with steerable filters.
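The steerable-filter idea behind this reference is that filters are expanded in a fixed analytic basis, so the same learned coefficients produce a filter at any scale by rescaling the basis. The sketch below uses a Gaussian-derivative-like basis as an illustrative stand-in; it is not the paper's exact Hermite construction, and the basis normalisation is an assumption.

```python
# A minimal sketch of weight sharing via scale-steerable filters.
import torch

def gaussian_basis(size, sigma):
    r = torch.arange(size, dtype=torch.float32) - size // 2
    yy, xx = torch.meshgrid(r, r, indexing="ij")
    g = torch.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    # A tiny basis: the Gaussian and its two first derivatives.
    basis = torch.stack([g, xx * g, yy * g])
    return basis / basis.flatten(1).norm(dim=1)[:, None, None]

weights = torch.randn(3)  # learned expansion coefficients, shared over scales
for sigma in (1.0, 1.5, 2.25):
    # The same coefficients combine a rescaled basis into a rescaled filter.
    filt = (weights[:, None, None] * gaussian_basis(9, sigma)).sum(0)
    print(sigma, filt.shape)  # a 9x9 filter at each scale
```

Because the basis is defined analytically, no pixel-level interpolation of filters is needed, which is what makes the construction "steerable" in scale.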
Scale-Invariant Convolutional Neural Networks
- Computer Science · ArXiv
- 2014
A scale-invariant convolutional neural network (SiCNN) is presented, a model designed to incorporate multi-scale feature extraction and classification into the network structure; results show that SiCNN detects features at various scales and that the classification result exhibits strong robustness against object scale variations.
Locally Scale-Invariant Convolutional Neural Networks
- Computer Science · ArXiv
- 2014
A simple model is presented that allows ConvNets to learn features in a locally scale-invariant manner without increasing the number of model parameters, and it is shown on a modified MNIST dataset that, when faced with scale variation, building in scale-invariance allows ConvNets to learn more discriminative features with reduced chances of overfitting.
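The "no extra parameters" property can be seen in a short sketch: one shared filter bank is applied to rescaled copies of the input, responses are mapped back to a common resolution, and a pointwise max is taken over scales. The scales and filter sizes below are illustrative assumptions, not the paper's settings.

```python
# A minimal sketch of local scale invariance via max-pooling over scales.
import torch
import torch.nn.functional as F

conv = torch.nn.Conv2d(1, 8, 5, padding=2)  # parameters shared across scales

def locally_scale_invariant(x, scales=(0.75, 1.0, 1.25)):
    h, w = x.shape[-2:]
    responses = []
    for s in scales:
        xs = F.interpolate(x, scale_factor=s, mode="bilinear", align_corners=False)
        r = conv(xs)
        # Undo the rescaling so responses align spatially before pooling.
        responses.append(F.interpolate(r, size=(h, w), mode="bilinear",
                                       align_corners=False))
    # Pointwise max over scales: invariance without extra parameters.
    return torch.stack(responses, dim=0).max(dim=0).values

print(locally_scale_invariant(torch.randn(1, 1, 28, 28)).shape)  # (1, 8, 28, 28)
```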
Scale-Equivariant Neural Networks with Decomposed Convolutional Filters
- Computer Science · ArXiv
- 2019
Numerical experiments demonstrate that the proposed scale-equivariant neural network with decomposed convolutional filters (ScDCFNet) achieves significantly improved performance in multiscale image classification and better interpretability than regular CNNs at a reduced model size.
Warped Convolutions: Efficient Invariance to Spatial Transformations
- Mathematics · ICML
- 2017
This work presents a construction that is simple and exact, yet has the same computational complexity that standard convolutions enjoy, consisting of a constant image warp followed by a simple convolution, which are standard blocks in deep learning toolboxes.
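The warp-then-convolve construction can be sketched in a few lines: a fixed log-polar warp turns scalings and rotations about the image center into translations, after which a standard convolution applies. The grid resolution and radius range below are illustrative assumptions.

```python
# A minimal sketch of a warped convolution: fixed log-polar warp + plain conv.
import math
import torch
import torch.nn.functional as F

def log_polar_grid(h, w, r_min=0.05):
    theta = torch.linspace(-math.pi, math.pi, w)
    log_r = torch.linspace(math.log(r_min), 0.0, h)  # radius in (r_min, 1]
    rr = log_r.exp()[:, None].expand(h, w)
    grid_x = rr * theta.cos()[None, :]
    grid_y = rr * theta.sin()[None, :]
    return torch.stack([grid_x, grid_y], dim=-1)[None]  # (1, h, w, 2)

x = torch.randn(1, 1, 64, 64)
warped = F.grid_sample(x, log_polar_grid(64, 64), align_corners=False)
y = torch.nn.Conv2d(1, 8, 3, padding=1)(warped)  # plain conv on warped image
print(y.shape)  # torch.Size([1, 8, 64, 64])
```

Since the warp is a constant resampling and the convolution is standard, the whole pipeline keeps the computational complexity of an ordinary convolutional layer.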
Harmonic Networks: Deep Translation and Rotation Equivariance
- Computer Science · 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2017
H-Nets are presented, a CNN exhibiting equivariance to patch-wise translation and 360° rotation, and it is demonstrated that their layers are general enough to be used in conjunction with the latest architectures and techniques, such as deep supervision and batch normalization.
Deep Pyramidal Residual Networks
- Computer Science · 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2017
This research gradually increases the feature map dimension at all units to involve as many locations as possible in the network architecture and proposes a novel residual unit capable of further improving the classification accuracy with the new network architecture.
Deep Scale-spaces: Equivariance Over Scale
- Computer Science · NeurIPS
- 2019
Deep scale-spaces (DSS) is introduced, a generalization of convolutional neural networks, exploiting the scale symmetry structure of conventional image recognition tasks, and scale equivariant cross-correlations based on a principled extension of convolutions are constructed.
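The scale cross-correlations of DSS map naturally onto dilated convolutions when the scales are dyadic. The sketch below shows that correspondence under assumed settings; the scale-space blurring that DSS applies before dilation is omitted here for brevity.

```python
# A minimal sketch of a dilation-based scale cross-correlation.
import torch
import torch.nn.functional as F

weight = torch.randn(8, 1, 3, 3) * 0.1  # one filter bank shared over scales

def scale_cross_correlation(x, dilations=(1, 2, 4)):
    outs = []
    for d in dilations:
        # Dilation d applies the same filter at the d-times-larger scale.
        outs.append(F.conv2d(x, weight, padding=d, dilation=d))
    return torch.stack(outs, dim=1)  # (B, S, C, H, W)

print(scale_cross_correlation(torch.randn(1, 1, 32, 32)).shape)
# torch.Size([1, 3, 8, 32, 32])
```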