Corpus ID: 239998450

ConAM: Confidence Attention Module for Convolutional Neural Networks

@article{Xue2021ConAMCA,
  title={ConAM: Confidence Attention Module for Convolutional Neural Networks},
  author={Yu Xue and Ziming Yuan and Ferrante Neri},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.14369}
}
The so-called “attention” is an efficient mechanism for improving the performance of convolutional neural networks. It uses contextual information to recalibrate the input and thereby strengthen the propagation of informative features. However, the majority of attention mechanisms consider only local or only global contextual information, which restricts feature extraction to a single perspective. Moreover, many existing mechanisms use the contextual information to recalibrate the input directly, which unilaterally… 
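For orientation, the recalibration pattern the abstract refers to can be illustrated with a minimal squeeze-and-excitation-style sketch in PyTorch. This is a generic example of recalibrating the input from global context, not ConAM's actual module; the class name, reduction ratio, and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class GlobalContextRecalibration(nn.Module):
    """Generic SE-style recalibration (illustrative, not ConAM itself): global context is
    summarized per channel, mapped to weights, and used to rescale the input."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # global context: one value per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # recalibrate the input features


x = torch.randn(2, 64, 32, 32)
print(GlobalContextRecalibration(64)(x).shape)       # torch.Size([2, 64, 32, 32])
```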


References

Showing 1-10 of 52 references
Gather-Excite: Exploiting Feature Context in Convolutional Neural Networks
This work proposes a simple, lightweight solution to the issue of limited context propagation in ConvNets, which propagates context across a group of neurons by aggregating responses over their extent and redistributing the aggregates back through the group.
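A minimal sketch of the parameter-free form of this gather/redistribute idea, assuming average pooling as the aggregation and nearest-neighbour upsampling for redistribution (both illustrative choices, not the paper's exact operators):

```python
import torch
import torch.nn.functional as F

def gather_excite_param_free(x: torch.Tensor, extent: int = 4) -> torch.Tensor:
    """Sketch of a parameter-free gather-excite operator (assumed form, for illustration).
    Gather: average responses over local neighbourhoods of size `extent`.
    Excite: redistribute the aggregates by upsampling and gating the input."""
    b, c, h, w = x.shape
    gathered = F.avg_pool2d(x, kernel_size=extent, stride=extent)    # aggregate over the extent
    redistributed = F.interpolate(gathered, size=(h, w), mode="nearest")
    return x * torch.sigmoid(redistributed)                          # gate the original responses


x = torch.randn(1, 16, 32, 32)
print(gather_excite_param_free(x).shape)  # torch.Size([1, 16, 32, 32])
```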
BAM: Bottleneck Attention Module
A simple and effective attention module, the Bottleneck Attention Module (BAM), which can be integrated with any feed-forward convolutional neural network and infers an attention map along two separate pathways, channel and spatial.
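A rough sketch of such a two-pathway block, assuming the commonly described structure (channel pathway from a pooled descriptor, spatial pathway from dilated convolutions, maps combined by broadcast addition); the reduction ratio and dilation are illustrative:

```python
import torch
import torch.nn as nn

class BAMSketch(nn.Module):
    """Simplified BAM-style block: channel and spatial pathways each produce an attention
    map; the maps are broadcast-added, squashed with a sigmoid, and applied residually."""

    def __init__(self, channels: int, reduction: int = 16, dilation: int = 4):
        super().__init__()
        mid = channels // reduction
        self.channel_path = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, mid, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1),          # shape (B, C, 1, 1)
        )
        self.spatial_path = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1),
            nn.Conv2d(mid, mid, kernel_size=3, padding=dilation, dilation=dilation),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, 1, kernel_size=1),                 # shape (B, 1, H, W)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attention = torch.sigmoid(self.channel_path(x) + self.spatial_path(x))  # broadcast add
        return x * (1 + attention)                            # residual-style recalibration
```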
ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks
This paper proposes an Efficient Channel Attention (ECA) module, which involves only a handful of parameters while bringing a clear performance gain, and develops a method to adaptively select the kernel size of the 1D convolution, which determines the coverage of local cross-channel interaction.
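A sketch of this idea, assuming the frequently quoted kernel-size heuristic with gamma=2 and b=1; this is an illustration, not a faithful reproduction of the official implementation:

```python
import math
import torch
import torch.nn as nn

class ECASketch(nn.Module):
    """ECA-style channel attention: a 1D convolution over the pooled channel descriptor
    models local cross-channel interaction without dimensionality reduction."""

    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        t = int(abs(math.log2(channels) / gamma + b / gamma))
        k = t if t % 2 else t + 1                        # kernel size must be odd
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        y = self.pool(x).view(n, 1, c)                   # channel descriptor as a 1-D sequence
        y = torch.sigmoid(self.conv(y)).view(n, c, 1, 1)
        return x * y                                     # local cross-channel interaction only
```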
CBAM: Convolutional Block Attention Module
The proposed Convolutional Block Attention Module (CBAM), a simple yet effective attention module for feed-forward convolutional neural networks, can be integrated into any CNN architecture seamlessly with negligible overhead and is end-to-end trainable along with the base CNN.
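A simplified sketch of the sequential channel-then-spatial attention pattern; the shared-MLP reduction ratio and the 7x7 spatial kernel are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CBAMSketch(nn.Module):
    """Simplified CBAM-style block: channel attention followed by spatial attention,
    each applied multiplicatively to the feature map."""

    def __init__(self, channels: int, reduction: int = 16, spatial_kernel: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(                        # shared MLP for avg- and max-pooled descriptors
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size=spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = F.adaptive_avg_pool2d(x, 1)
        mx = F.adaptive_max_pool2d(x, 1)
        out = x * torch.sigmoid(self.mlp(avg) + self.mlp(mx))         # channel attention
        avg_map = out.mean(dim=1, keepdim=True)
        max_map, _ = out.max(dim=1, keepdim=True)
        out = out * torch.sigmoid(self.spatial(torch.cat([avg_map, max_map], dim=1)))  # spatial attention
        return out
```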
Densely Connected Convolutional Networks
The Dense Convolutional Network (DenseNet) connects each layer to every other layer in a feed-forward fashion and has several compelling advantages: it alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.
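A toy dense block illustrating the feed-forward concatenation pattern; the growth rate and layer count are illustrative, and the bottleneck/transition layers of the full architecture are omitted:

```python
import torch
import torch.nn as nn

class DenseBlockSketch(nn.Module):
    """Toy dense block in the DenseNet spirit: every layer receives the concatenation
    of all preceding feature maps."""

    def __init__(self, in_channels: int, growth_rate: int = 12, num_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1, bias=False),
            ))
            channels += growth_rate                      # the next layer sees all earlier outputs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))  # dense feed-forward connectivity
        return torch.cat(features, dim=1)
```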
Striving for Simplicity: The All Convolutional Net
It is found that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks.
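The replacement is straightforward to illustrate; the channel count and kernel sizes below are arbitrary, chosen only to show that both paths halve the spatial resolution:

```python
import torch
import torch.nn as nn

# Two ways to halve spatial resolution: a max-pooling layer and, as in the all-convolutional
# net, a convolution with stride 2.
pooling_downsample = nn.Sequential(
    nn.Conv2d(32, 32, kernel_size=3, padding=1),
    nn.MaxPool2d(kernel_size=2, stride=2),
)
strided_conv_downsample = nn.Conv2d(32, 32, kernel_size=3, stride=2, padding=1)

x = torch.randn(1, 32, 16, 16)
print(pooling_downsample(x).shape, strided_conv_downsample(x).shape)  # both (1, 32, 8, 8)
```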
SPANet: Spatial Pyramid Attention Network for Enhanced Image Recognition
Jingda Guo, Xu Ma, +6 authors, Song Fu. 2020 IEEE International Conference on Multimedia and Expo (ICME), 2020.
The experimental results show that SPANet significantly improves recognition accuracy without introducing much computational overhead compared with other CNN models.
Going deeper with convolutions
We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Improved Regularization of Convolutional Neural Networks with Cutout
This paper shows that the simple regularization technique of randomly masking out square regions of input during training, which is called cutout, can be used to improve the robustness and overall performance of convolutional neural networks.
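A minimal sketch of the masking step, assuming a single square hole per image; the mask size and zero fill value are illustrative choices:

```python
import torch

def cutout(image: torch.Tensor, size: int = 8) -> torch.Tensor:
    """Cutout-style regularization sketch: zero out one randomly placed square region
    of a (C, H, W) image during training."""
    _, h, w = image.shape
    cy = torch.randint(0, h, (1,)).item()               # centre of the square mask
    cx = torch.randint(0, w, (1,)).item()
    y0, y1 = max(0, cy - size // 2), min(h, cy + size // 2)
    x0, x1 = max(0, cx - size // 2), min(w, cx + size // 2)
    out = image.clone()
    out[:, y0:y1, x0:x1] = 0.0                          # mask the region
    return out


img = torch.randn(3, 32, 32)
print(cutout(img).shape)  # torch.Size([3, 32, 32])
```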
Deep Residual Learning for Image Recognition
This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
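A basic residual block illustrating the identity-shortcut idea; the channel count and two-convolution layout are illustrative (the original architecture also uses projection shortcuts and bottleneck variants):

```python
import torch
import torch.nn as nn

class ResidualBlockSketch(nn.Module):
    """Basic residual block: the stacked layers learn a residual F(x) that is added
    back to the input through an identity shortcut."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.relu(self.body(x) + x)              # identity shortcut eases optimization
```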