Squeeze-and-Excitation Networks

@article{Hu2017SqueezeandExcitationN,
  title={Squeeze-and-Excitation Networks},
  author={Jie Hu and Li Shen and Samuel Albanie and Gang Sun and Enhua Wu},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2017},
  volume={42},
  pages={2011-2023}
}
  • Jie Hu, Li Shen, Samuel Albanie, Gang Sun, Enhua Wu
  • Published 5 September 2017
  • Computer Science
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
The central building block of convolutional neural networks (CNNs) is the convolution operator, which enables networks to construct informative features by fusing both spatial and channel-wise information within local receptive fields at each layer. A broad range of prior research has investigated the spatial component of this relationship, seeking to strengthen the representational power of a CNN by enhancing the quality of spatial encodings throughout its feature hierarchy. In this work, we… 
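
As a concrete illustration of the SE mechanism the abstract alludes to, below is a minimal PyTorch sketch of the block: a global-average-pooling "squeeze", a two-layer bottleneck "excitation" (the reduction ratio of 16 is the paper's default), and channel-wise rescaling. The class name and layer packaging are ours, not the authors' reference implementation.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation block: global-average-pool ("squeeze") each
    channel, pass the channel descriptor through a small bottleneck MLP
    ("excitation"), and rescale the input feature map channel-wise."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        s = x.mean(dim=(2, 3))            # squeeze: (B, C) channel descriptor
        w = self.fc(s).view(b, c, 1, 1)   # excitation: per-channel gates in (0, 1)
        return x * w                      # recalibrate the original features

x = torch.randn(2, 64, 32, 32)
print(SEBlock(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```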

Recalibrating Fully Convolutional Networks With Spatial and Channel “Squeeze and Excitation” Blocks

This paper effectively incorporates the recently proposed “squeeze and excitation” (SE) modules, originally introduced for channel recalibration in image classification, into three state-of-the-art F-CNNs and demonstrates a consistent improvement in segmentation accuracy on three challenging benchmark datasets.

Concurrent Spatial and Channel Squeeze & Excitation in Fully Convolutional Networks

This paper introduces three variants of SE modules for image segmentation, effectively incorporates these SE modules within three different state-of-the-art F-CNNs (DenseNet, SD-Net, U-Net), and observes a consistent improvement in performance across all architectures while minimally affecting model complexity.
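
A minimal PyTorch sketch of the channel (cSE), spatial (sSE), and concurrent (scSE) variants described above. The element-wise max used to combine the two recalibrated maps and the reduction ratio of 2 are plausible choices consistent with the paper's presentation, not a transcription of its code.

```python
import torch
import torch.nn as nn

class SpatialSE(nn.Module):
    """sSE: a 1x1 conv squeezes channels to a single spatial map; a sigmoid
    turns it into per-pixel gates that rescale every channel at that location."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x):
        return x * torch.sigmoid(self.conv(x))

class ChannelSE(nn.Module):
    """cSE: the standard SE recalibration (global pool + bottleneck MLP)."""
    def __init__(self, channels: int, reduction: int = 2):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3))).view(b, c, 1, 1)
        return x * w

class ConcurrentSE(nn.Module):
    """scSE: apply both recalibrations and aggregate them element-wise
    (max here; the paper also explores other aggregations)."""
    def __init__(self, channels: int):
        super().__init__()
        self.cse, self.sse = ChannelSE(channels), SpatialSE(channels)

    def forward(self, x):
        return torch.max(self.cse(x), self.sse(x))
```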

Channel Equilibrium Networks for Learning Deep Representation

This work proposes to "wake up" inhibited channels during training by designing a novel neural building block, termed the Channel Equilibrium (CE) block, which enables channels at the same layer to contribute equally to the learned representation.

Nested Dense Attention Network for Single Image Super-Resolution

This work proposes the nested dense attention network (NDAN) for generating more refined and structured high-resolution images, together with a nested dense structure (NDS) that better integrates features of different levels extracted from different layers.

Single Image Super-Resolution via Squeeze and Excitation Network

The Squeeze and Excitation network is introduced to evaluate the importance of different feature maps while building the network; it enhances restoration performance and achieves state-of-the-art results on the super-resolution task.

Second-Order Attention Network for Single Image Super-Resolution

Experimental results demonstrate the superiority of the proposed second-order attention network (SAN) over state-of-the-art SISR methods in terms of both quantitative metrics and visual quality.

Gather-Excite: Exploiting Feature Context in Convolutional Neural Networks

This work proposes a simple, lightweight solution to the issue of limited context propagation in ConvNets, which propagates context across a group of neurons by aggregating responses over their extent and redistributing the aggregates back through the group.
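
The gather-redistribute idea can be sketched in a few lines, assuming the parameter-free variant: average pooling over a fixed extent plays the "gather" role, and nearest-neighbour upsampling followed by sigmoid gating redistributes the aggregates as multiplicative context. The pooling configuration below is illustrative rather than the paper's exact operator.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatherExcite(nn.Module):
    """Parameter-free gather-excite sketch: 'gather' aggregates responses
    over a neighbourhood with average pooling; 'excite' upsamples the
    aggregates back to input resolution and uses them as gates."""
    def __init__(self, extent: int = 8):
        super().__init__()
        self.extent = extent

    def forward(self, x):
        g = F.avg_pool2d(x, kernel_size=self.extent, stride=self.extent)  # gather
        g = F.interpolate(g, size=x.shape[-2:], mode='nearest')           # redistribute
        return x * torch.sigmoid(g)                                       # excite
```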

FCN: Fully Channel-Concatenated Network for Single Image Super-Resolution

A novel fully channel-concatenated network (FCN) is presented to further mine the representational capacity of deep models; all interlayer skips are implemented by a simple and straightforward operation, weighted channel concatenation (WCC), followed by a 1×1 conv layer.
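
A hypothetical sketch of the WCC operation as just described: a learnable scalar weight per incoming feature map, channel concatenation, and a 1×1 conv to fuse. The module and parameter names are ours, and the exact weighting scheme in the paper may differ.

```python
import torch
import torch.nn as nn

class WeightedChannelConcat(nn.Module):
    """Sketch of weighted channel concatenation (WCC): each incoming feature
    map gets a learnable scalar weight, the weighted maps are concatenated
    along the channel axis, and a 1x1 conv fuses them. All inputs must share
    the same spatial size. Names are illustrative, not from the paper's code."""
    def __init__(self, in_channels_list, out_channels):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(len(in_channels_list)))
        self.fuse = nn.Conv2d(sum(in_channels_list), out_channels, kernel_size=1)

    def forward(self, features):
        weighted = [w * f for w, f in zip(self.weights, features)]
        return self.fuse(torch.cat(weighted, dim=1))
```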

Competitive Inner-Imaging Squeeze and Excitation for Residual Network

This work proposes a competitive squeeze-excitation mechanism for the residual network and designs a novel inner-imaging competitive SE block that reduces resource consumption and re-images the global features of the intermediate network structure, modeling channel-wise relations with convolutions in the spatial domain.

Feedback Pyramid Attention Networks for Single Image Super-Resolution

A novel feedback connection structure is developed to enhance low-level feature expression with high-level information, and a pyramid non-local structure is introduced to model global contextual information at different scales and improve the discriminative representation of the network.
...

References

Showing 1-10 of 88 references

Gather-Excite: Exploiting Feature Context in Convolutional Neural Networks

This work proposes a simple, lightweight solution to the issue of limited context propagation in ConvNets, which propagates context across a group of neurons by aggregating responses over their extent and redistributing the aggregates back through the group.

Deep Pyramidal Residual Networks

This research gradually increases the feature map dimension at all units to involve as many locations as possible, and proposes a novel residual unit capable of further improving classification accuracy within the new network architecture.

Improved Regularization of Convolutional Neural Networks with Cutout

This paper shows that the simple regularization technique of randomly masking out square regions of input during training, which is called cutout, can be used to improve the robustness and overall performance of convolutional neural networks.
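
Cutout is simple enough to state directly in code. The sketch below follows the paper's description: a single square mask whose centre may fall anywhere in the image, clipped at the border; the mask size is the method's one hyperparameter.

```python
import torch

def cutout(img: torch.Tensor, size: int = 16) -> torch.Tensor:
    """Zero out one randomly positioned size x size square in a (C, H, W)
    image tensor; the square is clipped at the image border, as in the paper."""
    _, h, w = img.shape
    cy, cx = torch.randint(h, (1,)).item(), torch.randint(w, (1,)).item()
    y1, y2 = max(cy - size // 2, 0), min(cy + size // 2, h)
    x1, x2 = max(cx - size // 2, 0), min(cx + size // 2, w)
    out = img.clone()
    out[:, y1:y2, x1:x2] = 0.0
    return out
```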

Aggregated Residual Transformations for Deep Neural Networks

On the ImageNet-1K dataset, it is empirically shown that, even under the restricted condition of maintaining complexity, increasing cardinality improves classification accuracy and is more effective than going deeper or wider when capacity is increased.
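
In practice the aggregated transformation reduces to a grouped convolution, so a ResNeXt bottleneck block can be sketched compactly; `groups` below is the cardinality. Channel sizes are illustrative (e.g. `ResNeXtBlock(256, 128, cardinality=32)` matches the paper's 32×4d template).

```python
import torch.nn as nn

class ResNeXtBlock(nn.Module):
    """Aggregated residual transformation expressed as a grouped 3x3 conv:
    the `groups` argument is the cardinality (number of parallel paths)."""
    def __init__(self, channels: int, bottleneck: int, cardinality: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, bottleneck, 1, bias=False),
            nn.BatchNorm2d(bottleneck), nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck, bottleneck, 3, padding=1,
                      groups=cardinality, bias=False),
            nn.BatchNorm2d(bottleneck), nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck, channels, 1, bias=False),
            nn.BatchNorm2d(channels))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.body(x))  # identity shortcut + aggregated paths
```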

Network In Network

With enhanced local modeling via the micro network, the proposed deep network structure NIN is able to utilize global average pooling over feature maps in the classification layer, which is easier to interpret and less prone to overfitting than traditional fully connected layers.
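
The global-average-pooling classifier can be sketched as a drop-in head: a 1×1 conv produces one confidence map per class, global average pooling collapses each map to a scalar score, and no fully connected layer (or its parameters) is required. A minimal sketch, with names of our choosing:

```python
import torch.nn as nn

def gap_head(in_channels: int, num_classes: int) -> nn.Sequential:
    """NIN-style classification head: one feature map per class, averaged
    globally, in place of a fully connected layer."""
    return nn.Sequential(
        nn.Conv2d(in_channels, num_classes, kernel_size=1),  # one map per class
        nn.AdaptiveAvgPool2d(1),                             # global average pool
        nn.Flatten())                                        # (B, num_classes) logits
```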

CBAM: Convolutional Block Attention Module

The proposed Convolutional Block Attention Module (CBAM), a simple yet effective attention module for feed-forward convolutional neural networks, can be integrated into any CNN architectures seamlessly with negligible overheads and is end-to-end trainable along with base CNNs.
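
A compact sketch of CBAM's two sequential gates, following the paper's description: channel attention from a shared MLP applied to average- and max-pooled descriptors, then spatial attention from a 7×7 conv over channel-wise average and max maps. The reduction ratio and kernel size are the paper's defaults; the packaging into modules is ours.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Shared MLP over avg- and max-pooled channel descriptors."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels))

    def forward(self, x):
        avg = self.mlp(x.mean(dim=(2, 3)))   # avg-pooled descriptor
        mx = self.mlp(x.amax(dim=(2, 3)))    # max-pooled descriptor
        return torch.sigmoid(avg + mx)[:, :, None, None]

class SpatialAttention(nn.Module):
    """7x7 conv over stacked channel-wise average and max maps."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CBAM(nn.Module):
    """Channel attention followed by spatial attention, applied sequentially."""
    def __init__(self, channels: int):
        super().__init__()
        self.ca, self.sa = ChannelAttention(channels), SpatialAttention()

    def forward(self, x):
        x = x * self.ca(x)
        return x * self.sa(x)
```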

Dual Path Networks

This work reveals the equivalence of the state-of-the-art Residual Network (ResNet) and Densely Connected Convolutional Network (DenseNet) within the HORNN framework, and finds that ResNet enables feature re-use while DenseNet enables exploration of new features, both of which are important for learning good representations.

Densely Connected Convolutional Networks

The Dense Convolutional Network (DenseNet) connects each layer to every other layer in a feed-forward fashion and has several compelling advantages: it alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.
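
Dense connectivity is easy to make concrete: in the sketch below, each layer consumes the concatenation of the block input and all earlier layers' outputs and contributes `growth_rate` new channels. The BN-ReLU-Conv ordering follows the paper; bottleneck and transition layers are omitted for brevity.

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each layer receives the concatenation of all preceding feature maps
    and adds growth_rate new channels, so features are reused, not re-learned."""
    def __init__(self, in_channels: int, growth_rate: int, num_layers: int):
        super().__init__()
        self.layers = nn.ModuleList()
        for i in range(num_layers):
            c = in_channels + i * growth_rate  # channels seen by layer i
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(c), nn.ReLU(inplace=True),
                nn.Conv2d(c, growth_rate, kernel_size=3, padding=1, bias=False)))

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)
```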

SCA-CNN: Spatial and Channel-Wise Attention in Convolutional Networks for Image Captioning

This paper introduces a novel convolutional neural network dubbed SCA-CNN that incorporates spatial and channel-wise attention in a CNN and significantly outperforms state-of-the-art visual-attention-based image captioning methods.

Very Deep Convolutional Networks for Large-Scale Image Recognition

This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
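
The design principle, depth built from stacks of very small 3×3 filters, can be sketched as a reusable stage: two stacked 3×3 convolutions cover the same receptive field as a 5×5 filter with fewer parameters and an extra non-linearity. The helper below is illustrative, not the paper's exact configuration.

```python
import torch.nn as nn

def vgg_stage(in_channels: int, out_channels: int, num_convs: int) -> nn.Sequential:
    """VGG-style stage: num_convs stacked 3x3 conv + ReLU pairs, then a
    2x2 max pool that halves the spatial resolution."""
    layers = []
    for i in range(num_convs):
        layers += [nn.Conv2d(in_channels if i == 0 else out_channels,
                             out_channels, kernel_size=3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(kernel_size=2, stride=2))
    return nn.Sequential(*layers)
```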
...