Accuracy Booster: Performance Boosting using Feature Map Re-calibration

@article{Singh2020AccuracyBP,
  title={Accuracy Booster: Performance Boosting using Feature Map Re-calibration},
  author={Pravendra Singh and Pratik Mazumder and Vinay P. Namboodiri},
  journal={2020 IEEE Winter Conference on Applications of Computer Vision (WACV)},
  year={2020},
  pages={873-882}
}
Convolutional Neural Networks (CNNs) have been extremely successful in solving intensive computer vision tasks. […] We propose an architectural block that introduces much lower complexity than existing methods of boosting CNN performance, while performing significantly better than them. We carry out experiments on the CIFAR, ImageNet and MS-COCO datasets, and show that the proposed block can challenge the state-of-the-art results. Our method boosts the ResNet-50 architecture to perform comparably…
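The abstract above is truncated and does not reproduce the paper's exact block design, but the re-calibration idea it improves on is the Squeeze-and-Excitation (SE) block cited in the references below. As orientation, here is a minimal PyTorch sketch of SE-style channel re-calibration; the class name, reduction ratio, and layer layout are illustrative assumptions, not the Accuracy Booster block itself.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """SE-style channel re-calibration (after Hu et al.).

    A sketch of the baseline the Accuracy Booster paper builds on, not
    the paper's exact block; names and reduction ratio are assumptions.
    """
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: one descriptor per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # excitation: per-channel gates in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # re-calibrate feature maps channel-wise
```

In practice such a block is appended to each convolutional stage (for example, after the last convolution of a ResNet bottleneck), so the learned gates rescale that stage's output feature maps.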
Citations

EDS pooling layer
SkipConv: Skip Convolution for Computationally Efficient Deep CNNs
TLDR
This paper proposes a novel skip convolution operation that requires significantly less computation than the traditional convolution without sacrificing model accuracy, and shows empirically that the proposed convolution also works well for other tasks such as object detection.
Passive Batch Injection Training Technique: Boosting Network Performance by Injecting Mini-Batches from a different Data Distribution
TLDR
The proposed technique, namely the Passive Batch Injection Training Technique (PBITT), even reduces overfitting in networks that already use standard countermeasures such as L2 regularization and batch normalization, resulting in significant accuracy improvements.
Reconstruction Student with Attention for Student-Teacher Pyramid Matching
TLDR
A powerful method is proposed that compensates for the shortcomings of Student-Teacher Feature Pyramid Matching (STPM) and can be trained from only normal images in a small number of epochs.
CPWC: Contextual Point Wise Convolution for Object Recognition
TLDR
This work proposes an alternative design for pointwise convolution, which uses spatial information from the input efficiently and significantly improves the performance of the networks without substantially increasing the number of parameters and computations.
CSL Net: Convoluted SE and LSTM Blocks Based Network for Automatic Image Annotation
TLDR
A new model, CSL Net, is proposed that combines a convoluted squeeze-and-excitation block with Bi-LSTM blocks to predict tags for images, yielding better results than existing methods in terms of precision, recall and accuracy.
Leveraging Filter Correlations for Deep Model Compression
TLDR
The proposed compression method yields state-of-the-art FLOPs compression rates on various benchmarks, such as LeNet-5, VGG-16, and ResNet-50/56, while still achieving excellent predictive performance on tasks such as object detection on benchmark datasets.
Cooperative Initialization based Deep Neural Network Training
TLDR
A cooperative initialization is proposed for training deep networks with the ReLU activation function: multiple activation functions are used in the initial few epochs to update all sets of weight parameters, improving network performance.
Adversarial Mutual Leakage Network for Cell Image Segmentation
TLDR
Three segmentation methods are proposed that use GANs and information leakage between the generator and the discriminator, including an Adversarial Mutual Leakage Network in which the generator and the discriminator mutually leak information to each other.
Facial Expression Recognition using Residual Convnet with Image Augmentations
TLDR
In this study, the proposed method outperformed plain ResNet in all test scenarios without transfer learning, with potential for better performance when using a pre-trained model.
...

References

Showing 1-10 of 39 references
Squeeze-and-Excitation Networks
TLDR
This work proposes a novel architectural unit, termed the “Squeeze-and-Excitation” (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels, and shows that these blocks can be stacked together to form SENet architectures that generalise extremely effectively across different datasets.
Very Deep Convolutional Networks for Large-Scale Image Recognition
TLDR
This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Learning Transferable Architectures for Scalable Image Recognition
TLDR
This paper proposes to search for an architectural building block on a small dataset and then transfer the block to a larger dataset and introduces a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models.
Stability Based Filter Pruning for Accelerating Deep CNNs
TLDR
This work presents a stability-based approach for filter-level pruning of CNNs that reduces the number of FLOPs and the GPU memory footprint while significantly outperforming other state-of-the-art filter pruning methods.
CBAM: Convolutional Block Attention Module
TLDR
The proposed Convolutional Block Attention Module (CBAM), a simple yet effective attention module for feed-forward convolutional neural networks, can be integrated into any CNN architectures seamlessly with negligible overheads and is end-to-end trainable along with base CNNs.
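For reference, here is a minimal PyTorch sketch of the two-step attention CBAM describes: channel gating from pooled descriptors, followed by spatial gating from channel-pooled maps. The reduction ratio and 7x7 spatial kernel are common choices and should be read as assumptions, not the authors' reference code.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Sketch of CBAM-style attention: channel gating, then spatial gating.
    Sizes (reduction=16, 7x7 spatial kernel) are assumptions."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(  # shared MLP for channel attention
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel attention from average- and max-pooled descriptors.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-wise average and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True),
                       x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```

Because both gates are elementwise multiplications, the module can be dropped into an existing architecture after any convolutional block without changing tensor shapes.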
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
TLDR
This work introduces a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals, and further merges RPN and Fast R-CNN into a single network by sharing their convolutional features.
ImageNet classification with deep convolutional neural networks
TLDR
A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.
Gather-Excite: Exploiting Feature Context in Convolutional Neural Networks
TLDR
This work proposes a simple, lightweight solution to the issue of limited context propagation in ConvNets, which propagates context across a group of neurons by aggregating responses over their extent and redistributing the aggregates back through the group.
Going deeper with convolutions
We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Densely Connected Convolutional Networks
TLDR
The Dense Convolutional Network (DenseNet) connects each layer to every other layer in a feed-forward fashion and has several compelling advantages: it alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.
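To make the connectivity pattern concrete, here is a minimal PyTorch sketch of a dense block in which every layer consumes the concatenation of all earlier feature maps; the growth rate and layer layout are illustrative assumptions rather than the reference implementation.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Sketch of DenseNet-style connectivity: each layer receives the
    concatenation of all preceding feature maps. Growth rate and layer
    layout are illustrative assumptions."""
    def __init__(self, in_channels: int, growth_rate: int = 12, num_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(in_channels + i * growth_rate),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels + i * growth_rate, growth_rate,
                          kernel_size=3, padding=1, bias=False),
            ))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))  # reuse all earlier maps
        return torch.cat(features, dim=1)
```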
...