CBAM: Convolutional Block Attention Module

@inproceedings{Woo2018CBAMCB,
  title={CBAM: Convolutional Block Attention Module},
  author={Sanghyun Woo and Jongchan Park and Joon-Young Lee and In-So Kweon},
  booktitle={ECCV},
  year={2018}
}
We propose Convolutional Block Attention Module (CBAM), a simple yet effective attention module for feed-forward convolutional neural networks. [...] Our experiments show consistent improvements in classification and detection performance with various models, demonstrating the wide applicability of CBAM. The code and models will be publicly available.
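As a rough sketch of the mechanism the abstract describes (channel attention followed by spatial attention, applied sequentially), here is a minimal NumPy implementation. This is not the authors' code: the weight shapes, the shared-MLP reduction, and the single 7×7 convolution are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(f, w1, w2):
    """Channel attention: shared MLP over avg- and max-pooled descriptors."""
    avg = f.mean(axis=(1, 2))                        # (C,)
    mx = f.max(axis=(1, 2))                          # (C,)
    mlp = lambda v: w2 @ np.maximum(w1 @ v, 0.0)     # C -> C/r -> C
    return sigmoid(mlp(avg) + mlp(mx))               # (C,) in (0, 1)

def spatial_attention(f, kernel):
    """Spatial attention: kxk conv over stacked channel-wise avg/max maps."""
    k = kernel.shape[-1]
    pad = k // 2
    desc = np.stack([f.mean(axis=0), f.max(axis=0)])  # (2, H, W)
    padded = np.pad(desc, ((0, 0), (pad, pad), (pad, pad)))
    h, w = f.shape[1:]
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[:, i:i + k, j:j + k] * kernel)
    return sigmoid(out)                               # (H, W) in (0, 1)

def cbam(f, w1, w2, kernel):
    """Sequential refinement: channel attention first, then spatial."""
    f = channel_attention(f, w1, w2)[:, None, None] * f
    return spatial_attention(f, kernel)[None, :, :] * f
```

With `f` of shape `(C, H, W)`, `w1` of shape `(C//r, C)`, `w2` of shape `(C, C//r)`, and `kernel` of shape `(2, 7, 7)`, the output keeps the input's shape, so the module can slot between convolutional blocks.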
Citations

BA^2M: A Batch Aware Attention Module for Image Classification
TLDR
A batch-aware attention module (BA^2M) for feature enrichment from a distinctive perspective that can boost the performance of various network architectures and outperforms many classical attention methods.
BAM: Bottleneck Attention Module
TLDR
A simple and effective attention module, named Bottleneck Attention Module (BAM), that can be integrated with any feed-forward convolutional neural network and infers an attention map along two separate pathways, channel and spatial.
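Unlike CBAM's sequential design, the BAM summary above describes two pathways computed in parallel and combined before a single sigmoid, with a residual connection. A hedged NumPy sketch follows; the single plain convolution stands in for the paper's 1×1 reductions and dilated 3×3 convolutions, and all weight shapes are illustrative assumptions, not the authors' code.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bam(f, w1, w2, kernel):
    """BAM-style block: parallel channel and spatial pathways, summed
    with broadcasting, squashed by one sigmoid, applied residually."""
    c, h, w = f.shape
    # Channel pathway: global average pool -> bottleneck MLP (pre-sigmoid).
    mc = w2 @ np.maximum(w1 @ f.mean(axis=(1, 2)), 0.0)   # (C,)
    # Spatial pathway: channel-mean map -> one kxk conv (simplified stand-in
    # for the paper's 1x1 reductions plus dilated 3x3 convolutions).
    k = kernel.shape[-1]
    pad = k // 2
    padded = np.pad(f.mean(axis=0), pad)
    ms = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            ms[i, j] = np.sum(padded[i:i + k, j:j + k] * kernel)
    # Combine pathways, then refine with a residual connection.
    m = sigmoid(mc[:, None, None] + ms[None, :, :])       # (C, H, W)
    return f + f * m
```

Because the mask `m` lies in (0, 1), the residual form `f + f * m` scales each activation by a factor between 1 and 2 rather than suppressing it outright.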
DCANet: Learning Connected Attentions for Convolutional Neural Networks
  • Xu Ma, Jingda Guo, +4 authors Song Fu
  • Computer Science
  • 2021 IEEE International Conference on Multimedia and Expo (ICME)
  • 2021
TLDR
Deep Connected Attention Network (DCANet), a novel design that boosts attention modules in a CNN model without any modification of their internal structure; to achieve this, it interconnects adjacent attention blocks, making information flow among attention blocks possible.
Convolutional Neural Network optimization via Channel Reassessment Attention module
TLDR
A novel network optimization module called the Channel Reassessment Attention (CRA) module, which uses channel attention together with the spatial information of feature maps to enhance the representational power of networks.
Nonlocal spatial attention module for image classification
TLDR
A nonlocal spatial attention module (NL-SAM) that collects context information from all pixels to adaptively recalibrate spatial responses in a convolutional feature map, overcoming the limitations of repeated local operations, and exports a 2D spatial attention map to emphasize or suppress responses at different locations.
A Simple and Light-Weight Attention Module for Convolutional Neural Networks
TLDR
This work studies the effect of attention in convolutional neural networks and presents the idea in a simple self-contained module, called the Bottleneck Attention Module (BAM), which efficiently produces an attention map along two factorized axes, channel and spatial, with negligible overhead.
An Attention Module for Convolutional Neural Networks
TLDR
This work proposes an attention module for convolutional neural networks by developing an AW-convolution, where the shape of the attention maps matches that of the weights rather than the activations, and shows the module's effectiveness on several datasets for image classification and object detection tasks.
Channel Transformer Network
TLDR
A novel parameter-free method named Channel Transformer Network (CTN) is proposed to decrease or increase the channels of convolutional neural network modules while keeping most information at lower computational complexity; it can be used in other vision tasks such as image classification and object detection.
DFA: Improving Convolutional Networks with Dual Fusion Attention Module
TLDR
This work presents a Dual Fusion Attention (DFA) module that can tune the distribution of features by producing an attention mask relying on a dual fusion of spatial-location and channel information, so that every corresponding feature representation can adaptively enrich its discriminative regions and minimize the influence of background noise.
AFINet: Attentive Feature Integration Networks for Image Classification
TLDR
Attentive Feature Integration modules are designed that are applicable to most recent network architectures, leading to new architectures named AFINets, which adaptively integrate distinct information by explicitly modeling the subordinate relationship between different levels of features.

References

Showing 1–10 of 43 references
BAM: Bottleneck Attention Module
TLDR
A simple and effective attention module, named Bottleneck Attention Module (BAM), that can be integrated with any feed-forward convolutional neural network and infers an attention map along two separate pathways, channel and spatial.
Residual Attention Network for Image Classification
TLDR
The proposed Residual Attention Network is a convolutional neural network using an attention mechanism that can be incorporated into state-of-the-art feed-forward network architectures in an end-to-end training fashion and can be easily scaled up to hundreds of layers.
Squeeze-and-Excitation Networks
TLDR
This work proposes a novel architectural unit, termed the “Squeeze-and-Excitation” (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels, and shows that these blocks can be stacked together to form SENet architectures that generalise extremely effectively across different datasets.
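The squeeze-and-recalibrate operation described above is compact enough to sketch in NumPy. The bottleneck weights `w1` and `w2` are hypothetical placeholders for illustration, not taken from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(f, w1, w2):
    """SE-style block on a (C, H, W) feature map:
    squeeze (global average pool), excitation (bottleneck MLP + sigmoid),
    then channel-wise rescaling of the input."""
    z = f.mean(axis=(1, 2))                       # squeeze: (C,)
    s = sigmoid(w2 @ np.maximum(w1 @ z, 0.0))     # excitation: (C,) in (0, 1)
    return s[:, None, None] * f                   # recalibrated features
```

Since the excitation vector `s` lies in (0, 1), each channel is attenuated by a learned, input-dependent factor while the feature map's shape is preserved.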
SCA-CNN: Spatial and Channel-Wise Attention in Convolutional Networks for Image Captioning
TLDR
This paper introduces a novel convolutional neural network, dubbed SCA-CNN, that incorporates spatial and channel-wise attention in a CNN and significantly outperforms state-of-the-art visual-attention-based image captioning methods.
Deep Pyramidal Residual Networks
TLDR
This research gradually increases the feature map dimension at all units to involve as many locations as possible and proposes a novel residual unit capable of further improving classification accuracy with the new network architecture.
Going deeper with convolutions
We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition [...]
Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer
TLDR
This work shows that, by properly defining attention for convolutional neural networks, this type of information can be used to significantly improve the performance of a student CNN by forcing it to mimic the attention maps of a powerful teacher network.
Learning Deep Features for Discriminative Localization
In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network (CNN) to have remarkable localization ability [...]
Xception: Deep Learning with Depthwise Separable Convolutions
  • François Chollet
  • Computer Science, Mathematics
  • 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2017
TLDR
This work proposes a novel deep convolutional neural network architecture inspired by Inception, where Inception modules have been replaced with depthwise separable convolutions, and shows that this architecture, dubbed Xception, slightly outperforms Inception V3 on the ImageNet dataset and significantly outperforms it on a larger image classification dataset.
Very Deep Convolutional Networks for Large-Scale Image Recognition
TLDR
This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, showing that a significant improvement on prior-art configurations can be achieved by pushing the depth to 16–19 weight layers.