GPCA: A Probabilistic Framework for Gaussian Process Embedded Channel Attention
  • Jiyang Xie, Zhanyu Ma, Dongliang Chang, Guoqiang Zhang, Jun Guo
  • Published 10 March 2020
  • Computer Science, Medicine, Mathematics
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
Channel attention mechanisms have been commonly applied in many visual tasks for effective performance improvement. They reinforce the informative channels and suppress the useless ones. Recently, different channel attention modules have been proposed and implemented in various ways; generally speaking, they are mainly based on convolution and pooling operations. In this paper, we propose the Gaussian process embedded channel attention (GPCA) module and further interpret the…
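The abstract describes the general recipe behind convolution-and-pooling channel attention: pool each channel to a descriptor, pass it through a small gate, and rescale the channels. Below is a minimal NumPy sketch of that recipe in the SE style; it is an illustration of the baseline the paper builds on, not the paper's GPCA module (which replaces this deterministic gate with a Gaussian process), and the function and weight names are hypothetical.

```python
import numpy as np

def channel_attention(x, w1, w2):
    """SE-style channel attention sketch (illustrative, not GPCA itself).

    x  : feature map of shape (C, H, W)
    w1 : (C//r, C) reduction weights of the bottleneck MLP
    w2 : (C, C//r) expansion weights of the bottleneck MLP
    Returns the channel-recalibrated feature map, same shape as x.
    """
    # Squeeze: global average pooling collapses each channel to a scalar.
    z = x.mean(axis=(1, 2))                       # shape (C,)
    # Excitation: a two-layer bottleneck produces per-channel gates in (0, 1).
    h = np.maximum(0.0, w1 @ z)                   # ReLU, shape (C//r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))           # sigmoid, shape (C,)
    # Scale: reinforce informative channels, suppress useless ones.
    return x * s[:, None, None]

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4, 4))        # 8 channels, 4x4 spatial map
w1 = rng.normal(size=(2, 8))          # reduction ratio r = 4
w2 = rng.normal(size=(8, 2))
y = channel_attention(x, w1, w2)
```

Because the sigmoid gate lies in (0, 1), each output channel is a damped copy of its input channel; channels with gates near 1 pass through almost unchanged while channels with gates near 0 are suppressed.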
Towards A Universal Model for Cross-Dataset Crowd Counting
  • Zhiheng Ma, Xiaopeng Hong, Xing Wei, Yunfeng Qiu, Yihong Gong
This paper proposes to handle the practical problem of learning a universal model for crowd counting across scenes and datasets. We dissect that the crux of this problem is the catastrophic…


Stochastic Region Pooling: Make Attention More Expressive
A novel method for channel-wise attention networks, called Stochastic Region Pooling (SRP), is proposed, which makes the channel descriptors more representative and diverse by encouraging the feature map to have more or wider important feature responses.
CBAM: Convolutional Block Attention Module
The proposed Convolutional Block Attention Module (CBAM), a simple yet effective attention module for feed-forward convolutional neural networks, can be integrated into any CNN architectures seamlessly with negligible overheads and is end-to-end trainable along with base CNNs.
Densely Connected Convolutional Networks
The Dense Convolutional Network (DenseNet) connects each layer to every other layer in a feed-forward fashion and has several compelling advantages: it alleviates the vanishing-gradient problem, strengthens feature propagation, encourages feature reuse, and substantially reduces the number of parameters.
ImageNet Large Scale Visual Recognition Challenge
The creation of this benchmark dataset and the advances in object recognition that have been possible as a result are described, and the state-of-the-art computer vision accuracy is compared with human accuracy.
The Caltech-UCSD Birds-200-2011 Dataset
CUB-200-2011 is an extended version of CUB-200 [7], a challenging dataset of 200 bird species. The extended version roughly doubles the number of images per category and adds new part localization…
Squeeze-and-Excitation Networks
This work proposes a novel architectural unit, termed the “Squeeze-and-Excitation” (SE) block, that adaptively recalibrates channel-wise feature responses by explicitly modelling interdependencies between channels, and shows that these blocks can be stacked together to form SENet architectures that generalise extremely effectively across different datasets.
Attention Augmented Convolutional Networks
It is found that Attention Augmentation leads to consistent improvements in image classification on ImageNet and object detection on COCO across many different models and scales, including ResNets and a state-of-the-art mobile-constrained network, while keeping the number of parameters similar.
Microsoft COCO: Common Objects in Context
We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene…
Pattern Recognition and Machine Learning, Springer Science+Business Media LLC
  • 2006
Attention-Based Dropout Layer for Weakly Supervised Single Object Localization and Semantic Segmentation
An attention-based dropout layer is proposed, which utilizes the attention mechanism to locate the entire object efficiently and effectively improves the weakly supervised single object localization accuracy, thereby achieving a new state-of-the-art localization accuracy on CUB-200-2011 and an accuracy comparable to existing state-of-the-art methods on ImageNet-1k.