Accuracy Booster: Performance Boosting using Feature Map Re-calibration

@article{Singh2020AccuracyBP,
  title={Accuracy Booster: Performance Boosting using Feature Map Re-calibration},
  author={Pravendra Singh and Pratik Mazumder and Vinay P. Namboodiri},
  journal={2020 IEEE Winter Conference on Applications of Computer Vision (WACV)},
  year={2020},
  pages={873-882}
}
Convolutional Neural Networks (CNNs) have been extremely successful in solving intensive computer vision tasks. [...] Key Method: We propose an architectural block that introduces much lower complexity than existing methods of CNN performance boosting while performing significantly better than them. We carry out experiments on the CIFAR, ImageNet and MS-COCO datasets, and show that the proposed block can challenge state-of-the-art results. Our method boosts the ResNet-50 architecture to perform comparably…
Citations

EDS pooling layer
TLDR
A new EDS layer (Expansion Downsampling learnable-Scaling) is proposed to replace the existing pooling mechanism in CNNs with a two-step procedure that minimizes information loss by increasing the number of channels in the pooling operation.
SkipConv: Skip Convolution for Computationally Efficient Deep CNNs
TLDR
This paper proposes a novel skip convolution operation that requires significantly less computation than the traditional one without sacrificing model accuracy, and shows empirically that the proposed convolution also works well for other tasks such as object detection.
Passive Batch Injection Training Technique: Boosting Network Performance by Injecting Mini-Batches from a Different Data Distribution
TLDR
The proposed technique, namely Passive Batch Injection Training Technique (PBITT), even reduces the level of overfitting in networks that already use the standard techniques for reducing overfitting such as L2 regularization and batch normalization, resulting in significant accuracy improvements.
CPWC: Contextual Point Wise Convolution for Object Recognition
TLDR
This work proposes an alternative design for pointwise convolution, which uses spatial information from the input efficiently and significantly improves the performance of the networks without substantially increasing the number of parameters and computations.
CSL Net: Convoluted SE and LSTM Blocks Based Network for Automatic Image Annotation
  • 2019
Due to the advancement of multimedia technology, the availability and usage of image and video data is enormous. For indexing and retrieving those data, there is a need for an efficient technique. …
Leveraging Filter Correlations for Deep Model Compression
TLDR
The proposed compression method yields state-of-the-art FLOPs compression rates on various benchmarks, such as LeNet-5, VGG-16, and ResNet-50/56, while still achieving excellent predictive performance for tasks such as object detection on benchmark datasets.
Cooperative Initialization based Deep Neural Network Training
TLDR
A cooperative initialization is proposed for training deep networks with the ReLU activation function: multiple activation functions are used in the initial few epochs to update all sets of weight parameters, improving network performance.
A "Network Pruning Network" Approach to Deep Model Compression
TLDR
This work presents a filter pruning approach for deep model compression, using a multitask network that can prune the network in one go and does not require specifying the degree of pruning for each layer (and can learn it instead).
Facial Expression Recognition using Residual Convnet with Image Augmentations
During the COVID-19 pandemic, many offline activities were turned into online activities via video meetings to prevent the spread of the COVID-19 virus. In online video meetings, some…
Feedback Attention for Cell Image Segmentation
TLDR
This paper addresses the cell image segmentation task with a Feedback Attention mechanism that imitates feedback processing in the human brain, feeding the feature maps of the output layer back to layers close to the input.

References

Showing 1–10 of 39 references
Squeeze-and-Excitation Networks
TLDR
This work proposes a novel architectural unit, termed the “Squeeze-and-Excitation” (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels, and shows that these blocks can be stacked together to form SENet architectures that generalise extremely effectively across different datasets.
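
The channel re-calibration performed by the SE block can be summarized in a few lines. Below is a minimal PyTorch sketch, assuming the commonly used global-average "squeeze", a two-layer bottleneck MLP with reduction ratio 16, and a sigmoid gate; it is an illustration of the idea, not the reference implementation:

```python
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation style channel re-calibration (illustrative sketch)."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),  # squeeze to a bottleneck
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),  # expand back to C channels
            nn.Sigmoid(),                                # per-channel gates in (0, 1)
        )

    def forward(self, x):                    # x: (B, C, H, W)
        b, c, _, _ = x.shape
        s = x.mean(dim=(2, 3))               # "squeeze": global average pooling -> (B, C)
        w = self.fc(s).view(b, c, 1, 1)      # "excitation": per-channel weights
        return x * w                         # re-calibrate the feature map channel-wise
```

In SENet-style architectures, one such block is typically applied to the residual branch of each residual block before the skip-connection addition.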
Very Deep Convolutional Networks for Large-Scale Image Recognition
TLDR
This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Learning Transferable Architectures for Scalable Image Recognition
TLDR
This paper proposes to search for an architectural building block on a small dataset and then transfer the block to a larger dataset and introduces a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models.
Stability Based Filter Pruning for Accelerating Deep CNNs
TLDR
This work presents a stability-based approach for filter-level pruning of CNNs that reduces the number of FLOPs and the GPU memory footprint while significantly outperforming other state-of-the-art filter pruning methods.
CBAM: Convolutional Block Attention Module
TLDR
The proposed Convolutional Block Attention Module (CBAM), a simple yet effective attention module for feed-forward convolutional neural networks, can be integrated into any CNN architectures seamlessly with negligible overheads and is end-to-end trainable along with base CNNs.
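
A minimal PyTorch sketch of the channel-then-spatial attention sequence CBAM describes, assuming the commonly reported defaults (reduction ratio 16, 7x7 spatial kernel); details such as bias terms are assumptions rather than taken from the paper:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Channel attention: a shared MLP over global avg- and max-pooled descriptors."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):                                   # x: (B, C, H, W)
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))                  # MLP on avg-pooled descriptor
        mx = self.mlp(torch.amax(x, dim=(2, 3)))            # MLP on max-pooled descriptor
        gate = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * gate

class SpatialAttention(nn.Module):
    """Spatial attention: a 7x7 conv over channel-wise avg and max maps."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)                   # (B, 1, H, W)
        mx = torch.amax(x, dim=1, keepdim=True)             # (B, 1, H, W)
        gate = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * gate

class CBAM(nn.Module):
    """Channel attention followed by spatial attention, applied to a feature map."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.ca = ChannelAttention(channels, reduction)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))
```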
Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks
TLDR
This work introduces a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals, and further merges RPN and Fast R-CNN into a single network by sharing their convolutional features.
ImageNet classification with deep convolutional neural networks
TLDR
A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.
Gather-Excite: Exploiting Feature Context in Convolutional Neural Networks
TLDR
This work proposes a simple, lightweight solution to the issue of limited context propagation in ConvNets, which propagates context across a group of neurons by aggregating responses over their extent and redistributing the aggregates back through the group.
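
As a sketch of this gather-and-redistribute idea, a parameter-free variant can be written with average pooling as the "gather" aggregation and nearest-neighbour resizing plus a sigmoid gate as the "excite" redistribution; the extent of 8 and the pooling/resizing choices here are assumptions, not the paper's exact operators:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatherExcite(nn.Module):
    """Parameter-free gather-excite sketch: aggregate over a local extent, redistribute, gate."""
    def __init__(self, extent=8):
        super().__init__()
        self.extent = extent

    def forward(self, x):                                   # x: (B, C, H, W)
        # "Gather": aggregate responses over an (extent x extent) neighbourhood.
        g = F.avg_pool2d(x, kernel_size=self.extent, stride=self.extent, ceil_mode=True)
        # "Excite": redistribute each aggregate back over the positions it covers
        # and use it as a multiplicative gate on the original responses.
        g = F.interpolate(g, size=x.shape[-2:], mode='nearest')
        return x * torch.sigmoid(g)
```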
Going deeper with convolutions
We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge…
Play and Prune: Adaptive Filter Pruning for Deep Model Compression
TLDR
This work presents a new min-max framework for filter-level pruning of CNNs, which reduces the number of parameters of VGG-16 by an impressive factor of 17.5X and the number of FLOPs by 6.43X with no loss of accuracy, significantly outperforming other state-of-the-art filter pruning methods.