Homogeneous Vector Capsules Enable Adaptive Gradient Descent in Convolutional Neural Networks

@article{Byerly2021HomogeneousVC,
  title={Homogeneous Vector Capsules Enable Adaptive Gradient Descent in Convolutional Neural Networks},
  author={Adam Byerly and T. Kalganova},
  journal={IEEE Access},
  year={2021},
  volume={9},
  pages={48519-48530}
}
Neural networks traditionally produce a scalar value for an activated neuron. Capsules, on the other hand, produce a vector of values, which has been shown to correspond to a single, composite feature wherein the values of the components of the vectors indicate properties of the feature such as transformation or contrast. We present a new way of parameterizing and training capsules that we refer to as homogeneous vector capsules (HVCs). We demonstrate, experimentally, that altering a…
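A minimal NumPy sketch of the contrast the abstract draws: a conventional neuron produces one scalar, a capsule produces a vector, and the HVC idea (as read here, not necessarily the authors' exact formulation) replaces the usual matrix-multiply capsule transformation with an element-wise product against a weight vector of the same dimensionality. All sizes below are arbitrary.

import numpy as np

rng = np.random.default_rng(0)

# A conventional neuron: a weighted sum producing a single scalar activation.
x = rng.standard_normal(8)                  # incoming activations
w = rng.standard_normal(8)
scalar_activation = np.maximum(x @ w, 0.0)  # one number per neuron

# A capsule: a vector of values describing one composite feature.
# Classic capsules map an input capsule u_i to a prediction vector with a
# full transformation matrix W_ij (here 8 -> 8 dimensions).
u_i = rng.standard_normal(8)
W_ij = rng.standard_normal((8, 8))
u_hat_matrix = W_ij @ u_i                   # matrix-multiply parameterization

# Homogeneous vector capsules (illustrative reading of the paper's proposal):
# the transformation is an element-wise product with a weight vector of the
# same dimensionality, so input and output capsules stay "homogeneous" in
# dimension and each weight scales exactly one component.
w_ij = rng.standard_normal(8)
u_hat_hvc = w_ij * u_i                      # element-wise parameterization

print(scalar_activation, u_hat_matrix.shape, u_hat_hvc.shape)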
A Branching and Merging Convolutional Network with Homogeneous Filter Capsules
TLDR: A convolutional neural network design with additional branches after certain convolutions, so that it can extract features with differing effective receptive fields and levels of abstraction, establishes a new state of the art for the MNIST dataset with an accuracy of 99.84%. A toy sketch of the branching-and-merging idea follows.
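In the sketch below, features taken after different convolutions, and therefore with differing effective receptive fields, are merged before classification. The 3×3 convolution helper, layer count, and sizes are illustrative only, not the paper's architecture.

import numpy as np

rng = np.random.default_rng(0)

def conv3x3(x, k):
    # Very small "valid" 3x3 convolution, just enough to show depth / receptive field.
    h, w = x.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(x[i:i + 3, j:j + 3] * k)
    return out

image = rng.standard_normal((28, 28))
k1, k2, k3 = (rng.standard_normal((3, 3)) for _ in range(3))

# Trunk of stacked convolutions; each stage sees a larger effective receptive field.
f1 = conv3x3(image, k1)   # shallow features, small receptive field
f2 = conv3x3(f1, k2)      # intermediate features
f3 = conv3x3(f2, k3)      # deeper, more abstract features

# Branch after selected convolutions and merge the branches for classification,
# so the classifier sees several levels of abstraction at once.
merged = np.concatenate([f1.ravel(), f2.ravel(), f3.ravel()])
print(merged.shape)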
No routing needed between capsules
TLDR: This study shows that a simple convolutional neural network using HVCs performs as well as the prior best-performing capsule network on MNIST while using 5.5× fewer parameters and 4× fewer training epochs, with no reconstruction sub-network and no routing mechanism. A sketch of routing-free aggregation follows.
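The sketch below illustrates routing-free aggregation in the spirit of this summary, assuming HVC-style element-wise predictions that are simply summed per class capsule and scored by vector length; the capsule counts and dimensions are hypothetical.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sizes: 16 input capsules of dimension 8, 10 output class capsules.
n_in, dim, n_classes = 16, 8, 10
u = rng.standard_normal((n_in, dim))             # input capsule vectors
W = rng.standard_normal((n_classes, n_in, dim))  # one weight vector per (class, input capsule)

# HVC-style prediction: element-wise product, then a plain sum over input capsules.
# No iterative routing-by-agreement and no reconstruction sub-network are involved;
# the class score is simply the length of each class capsule.
class_capsules = (W * u[None, :, :]).sum(axis=1)       # (n_classes, dim)
class_scores = np.linalg.norm(class_capsules, axis=1)  # (n_classes,)
predicted_class = int(class_scores.argmax())
print(predicted_class)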

References

Showing 1-10 of 44 references
Network In Network
TLDR: With enhanced local modeling via the micro network, the proposed deep network structure NIN is able to utilize global average pooling over feature maps in the classification layer, which is easier to interpret and less prone to overfitting than traditional fully connected layers.
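Global average pooling, as used in NIN's classification layer, averages each final feature map down to a single value so that each class score is tied to one whole map rather than to a dense weight matrix; a minimal sketch with illustrative sizes:

import numpy as np

rng = np.random.default_rng(0)

# Feature maps from the last convolutional layer: (channels, height, width),
# with one channel per class when global average pooling feeds the classifier directly.
feature_maps = rng.standard_normal((10, 7, 7))

# Global average pooling: average each map down to one value per channel.
logits = feature_maps.mean(axis=(1, 2))        # shape (10,)
probs = np.exp(logits) / np.exp(logits).sum()  # softmax over class scores
print(probs.argmax())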
Xception: Deep Learning with Depthwise Separable Convolutions
  • François Chollet
  • Computer Science, Mathematics
  • 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2017
TLDR: This work proposes a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions, and shows that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset, and significantly outperforms it on a larger image classification dataset.
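A depthwise separable convolution factorizes a standard convolution into a per-channel (depthwise) spatial filter followed by a 1×1 (pointwise) channel mixer, which is the building block Xception uses in place of Inception modules. A small NumPy sketch with illustrative sizes:

import numpy as np

rng = np.random.default_rng(0)

c_in, c_out, h, w = 4, 8, 10, 10
x = rng.standard_normal((c_in, h, w))

# Depthwise step: one 3x3 kernel per input channel, applied to that channel only.
depthwise_k = rng.standard_normal((c_in, 3, 3))
depthwise = np.zeros((c_in, h - 2, w - 2))
for c in range(c_in):
    for i in range(h - 2):
        for j in range(w - 2):
            depthwise[c, i, j] = np.sum(x[c, i:i + 3, j:j + 3] * depthwise_k[c])

# Pointwise step: a 1x1 convolution mixes channels; together the two steps
# factorize a full convolution into far fewer multiply-adds and parameters.
pointwise_k = rng.standard_normal((c_out, c_in))
separable = np.einsum('oc,chw->ohw', pointwise_k, depthwise)
print(separable.shape)  # (8, 8, 8)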
Effects of Degradations on Deep Neural Network Architectures
TLDR: This is the first study of the performance of CapsuleNet (CapsNet) and other state-of-the-art CNN architectures under different types of image degradation, and a network setup is proposed that can enhance the robustness of any CNN architecture to certain degradations.
Improving the Robustness of Capsule Networks to Image Affine Transformations
  • Jindong Gu, Volker Tresp
  • Computer Science
  • 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2020
TLDR: It is revealed that the routing procedure contributes neither to the generalization ability nor to the affine robustness of CapsNets, and an affine CapsNet (Aff-CapsNet) is proposed that is more robust to affine transformations.
Building Deep, Equivariant Capsule Networks
TLDR: An alternative framework for capsule networks is proposed that learns to projectively encode the manifold of pose-variations, termed the space-of-variation (SOV), for every capsule-type of each layer, using a trainable, equivariant function defined over a grid of group-transformations.
Dynamic Routing Between Capsules
TLDR: It is shown that a discriminatively trained, multi-layer capsule system achieves state-of-the-art performance on MNIST and is considerably better than a convolutional net at recognizing highly overlapping digits.
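The routing-by-agreement procedure and the squash non-linearity from this paper can be sketched in a few lines; the capsule counts and the three routing iterations below are illustrative rather than the exact CapsNet configuration:

import numpy as np

def squash(v, axis=-1, eps=1e-9):
    # Squash non-linearity: keeps direction, maps length into [0, 1).
    sq_norm = np.sum(v ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * v / np.sqrt(sq_norm + eps)

rng = np.random.default_rng(0)
n_in, n_out, dim = 6, 3, 4
u_hat = rng.standard_normal((n_in, n_out, dim))  # prediction vectors

# Routing by agreement: coupling coefficients are raised for predictions that
# agree (large dot product) with the current output capsule.
b = np.zeros((n_in, n_out))
for _ in range(3):                                          # a few routing iterations
    c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)    # softmax over output capsules
    s = np.einsum('io,iod->od', c, u_hat)                   # weighted sum of predictions
    v = squash(s)                                            # output capsules
    b += np.einsum('iod,od->io', u_hat, v)                  # agreement update

print(np.linalg.norm(v, axis=1))  # capsule lengths, read as existence probabilities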
Transforming Auto-Encoders
TLDR: It is argued that neural networks can be used to learn features that output a whole vector of instantiation parameters, and that this is a much more promising way of dealing with variations in position, orientation, scale, and lighting than the methods currently employed in the neural networks community.
Path Capsule Networks
TLDR: This work introduces Path Capsule Network (PathCapsNet), a deep parallel multi-path version of CapsNet, and shows that a judicious coordination of depth, max-pooling, regularization by DropCircuit, and a new fan-in routing-by-agreement technique can achieve better or comparable results to CapsNet while further reducing the parameter count significantly.
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
TLDR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
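Batch normalization standardizes each feature over the mini-batch and then applies a learned scale (gamma) and shift (beta); a minimal sketch of the training-time computation, without the running averages used at inference:

import numpy as np

rng = np.random.default_rng(0)

def batch_norm(x, gamma, beta, eps=1e-5):
    # Normalize each feature over the mini-batch, then rescale and shift.
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

batch = rng.standard_normal((32, 16)) * 5.0 + 3.0  # pre-activations, batch of 32
gamma, beta = np.ones(16), np.zeros(16)
normalized = batch_norm(batch, gamma, beta)
print(normalized.mean(axis=0).round(3), normalized.std(axis=0).round(3))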