Squeeze-and-Excitation Networks

@article{Hu2020SqueezeandExcitationN,
  title={Squeeze-and-Excitation Networks},
  author={Jie Hu and Li Shen and Samuel Albanie and Gang Sun and E. Wu},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2020},
  volume={42},
  pages={2011-2023}
}
The central building block of convolutional neural networks (CNNs) is the convolution operator, which enables networks to construct informative features by fusing both spatial and channel-wise information within local receptive fields at each layer. A broad range of prior research has investigated the spatial component of this relationship, seeking to strengthen the representational power of a CNN by enhancing the quality of spatial encodings throughout its feature hierarchy. In this work, we…
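The abstract is cut off before it describes the paper's central contribution, the squeeze-and-excitation (SE) block: spatial information is "squeezed" into a per-channel descriptor by global average pooling, and an "excitation" gating network then re-weights each channel. A minimal PyTorch sketch of that idea follows; the class name is ours, and the reduction ratio of 16 is the paper's commonly used default.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation block: squeeze spatial information into a
    per-channel descriptor, then excite (re-weight) the channels with a
    small bottleneck gating network."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        s = x.mean(dim=(2, 3))                                 # squeeze: (B, C) channel descriptor
        w = torch.sigmoid(self.fc2(torch.relu(self.fc1(s))))   # excitation: channel gates in (0, 1)
        return x * w.view(b, c, 1, 1)                          # channel-wise recalibration
```

In the paper the block is inserted into existing architectures (for example, inside the residual branch of a ResNet block) with only a small parameter and compute overhead.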
Recalibrating Fully Convolutional Networks With Spatial and Channel “Squeeze and Excitation” Blocks
TLDR
This paper effectively incorporates the recently proposed "squeeze and excitation" (SE) modules for channel recalibration, originally introduced for image classification, into three state-of-the-art F-CNNs and demonstrates a consistent improvement in segmentation accuracy on three challenging benchmark datasets.
Channel Equilibrium Networks for Learning Deep Representation
TLDR
This work proposes to "wake them up" during training by designing a novel neural building block, termed Channel Equilibrium (CE) block, which enables channels at the same layer to contribute equally to the learned representation.
Nested Dense Attention Network for Single Image Super-Resolution
  • Cheng Qiu, Yirong Yao, Yuntao Du
  • Computer Science
  • ICMR
  • 2021
Recently, deep convolutional neural networks (CNNs) have been widely used in single image super-resolution (SISR) and have recorded impressive performance. However, most of the existing CNN architectures…
Volumetric Transformer Networks
TLDR
This work proposes a loss function defined between the warped features of pairs of instances, which improves the localization ability of VTN and consistently boosts the features' representation power and consequently the networks' accuracy on fine-grained image recognition and instance-level image retrieval.
Single Image Super-Resolution via Squeeze and Excitation Network
TLDR
The Squeeze and Excitation network is introduced to evaluate the importance of different feature maps while building the network, enhancing restoration performance and achieving state-of-the-art results on the super-resolution task.
Second-Order Attention Network for Single Image Super-Resolution
TLDR
Experimental results demonstrate the superiority of the SAN network over state-of-the-art SISR methods in terms of both quantitative metrics and visual quality.
Gather-Excite: Exploiting Feature Context in Convolutional Neural Networks
TLDR
This work proposes a simple, lightweight solution to the issue of limited context propagation in ConvNets, which propagates context across a group of neurons by aggregating responses over their extent and redistributing the aggregates back through the group.
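As a rough illustration of the gather/redistribute pattern described above, a parameter-free sketch in PyTorch is given below; the module name and the extent value are ours, not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatherExcite(nn.Module):
    """Parameter-free gather-excite sketch: gather context by average-pooling
    responses over a local extent, then excite by resizing the aggregates back
    to the feature map and using them as a sigmoid gate."""

    def __init__(self, extent: int = 8):
        super().__init__()
        self.extent = extent

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # gather: aggregate responses over neighbourhoods of size `extent`
        g = F.avg_pool2d(x, kernel_size=self.extent, stride=self.extent, ceil_mode=True)
        # excite: redistribute the aggregates back across the group
        g = F.interpolate(g, size=x.shape[-2:], mode="nearest")
        return x * torch.sigmoid(g)
```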
Competitive Inner-Imaging Squeeze and Excitation for Residual Network
TLDR
This work proposes a competitive squeeze-and-excitation mechanism for the residual network and designs a novel inner-imaging competitive SE block that reduces overhead and re-images the global features of the intermediate network structure, modelling channel-wise relations with spatial convolutions.
Image super-resolution via channel attention and spatial graph convolutional network
TLDR
This work proposes a channel attention and spatial graph convolutional network (CASGCN) for more powerful feature extraction and modeling of feature correlations, and shows the effectiveness of the CASGCN in terms of quantitative and visual results.
Feedback Pyramid Attention Networks for Single Image Super-Resolution
TLDR
A novel feedback connection structure is developed to enhance low-level feature expression with high-level information, together with a pyramid non-local structure that models global contextual information at different scales and improves the discriminative representation of the network.

References

SHOWING 1-10 OF 90 REFERENCES
Gather-Excite: Exploiting Feature Context in Convolutional Neural Networks
TLDR
This work proposes a simple, lightweight solution to the issue of limited context propagation in ConvNets, which propagates context across a group of neurons by aggregating responses over their extent and redistributing the aggregates back through the group.
Deep Pyramidal Residual Networks
TLDR
This research gradually increases the feature map dimension at all units to involve as many locations as possible in the network architecture and proposes a novel residual unit capable of further improving the classification accuracy with the new network architecture.
Improved Regularization of Convolutional Neural Networks with Cutout
TLDR
This paper shows that the simple regularization technique of randomly masking out square regions of input during training, called cutout, can be used to improve the robustness and overall performance of convolutional neural networks.
Aggregated Residual Transformations for Deep Neural Networks
TLDR
On the ImageNet-1K dataset, it is empirically shown that, even under the restricted condition of maintaining complexity, increasing cardinality improves classification accuracy and is more effective than going deeper or wider when capacity is increased.
Network In Network
TLDR
With enhanced local modeling via the micro network, the proposed deep network structure NIN is able to utilize global average pooling over feature maps in the classification layer, which is easier to interpret and less prone to overfitting than traditional fully connected layers.
CBAM: Convolutional Block Attention Module
TLDR
The proposed Convolutional Block Attention Module (CBAM), a simple yet effective attention module for feed-forward convolutional neural networks, can be integrated into any CNN architecture seamlessly with negligible overhead and is end-to-end trainable along with base CNNs.
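For context, a hedged sketch of the channel-then-spatial attention ordering that CBAM describes is given below; the class name, reduction ratio, and kernel size are illustrative defaults rather than values taken from this page.

```python
import torch
import torch.nn as nn

class CBAMSketch(nn.Module):
    """Sketch of a convolutional block attention module: channel attention from
    pooled descriptors passed through a shared MLP, followed by spatial
    attention from a convolution over channel-pooled maps."""

    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # channel attention: average- and max-pooled descriptors share one MLP
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # spatial attention: pool across channels, then convolve and gate
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(s))
```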
Dual Path Networks
TLDR
This work reveals the equivalence of the state-of-the-art Residual Network (ResNet) and Densely Connected Convolutional Network (DenseNet) within the HORNN framework, and finds that ResNet enables feature re-use while DenseNet enables new feature exploration, both of which are important for learning good representations.
SCA-CNN: Spatial and Channel-Wise Attention in Convolutional Networks for Image Captioning
TLDR
This paper introduces a novel convolutional neural network dubbed SCA-CNN that incorporates spatial and channel-wise attention in a CNN and significantly outperforms state-of-the-art visual attention-based image captioning methods.
Densely Connected Convolutional Networks
TLDR
The Dense Convolutional Network (DenseNet) connects each layer to every other layer in a feed-forward fashion and has several compelling advantages: it alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.
Very Deep Convolutional Networks for Large-Scale Image Recognition
TLDR
This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.