Regularized Pooling

Takato Otsuzuki, Hideaki Hayashi, Yuchen Zheng, Seiichi Uchida
In convolutional neural networks (CNNs), pooling operations play important roles such as dimensionality reduction and deformation compensation. In general, max pooling, which is the most widely used operation for local pooling, is performed independently for each kernel. However, the deformation may be spatially smooth over the neighboring kernels. This means that max pooling is too flexible to compensate for actual deformations. In other words, its excessive flexibility risks canceling the… 
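To make the "independently for each kernel" point concrete, here is a minimal sketch of plain (non-regularized) max pooling over non-overlapping windows; every window takes its own maximum with no coupling to its neighbors, which is exactly the per-kernel flexibility the paper argues can over-compensate for spatially smooth deformations. The function name and the toy feature map are illustrative, not from the paper.

```python
import numpy as np

def max_pool2d(x, k=2):
    """Plain max pooling: each k-by-k window is reduced to its maximum,
    independently of the neighboring windows."""
    h, w = x.shape
    blocks = x[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k)
    return blocks.max(axis=(1, 3))

feat = np.array([[1., 2., 5., 3.],
                 [4., 0., 1., 2.],
                 [3., 1., 0., 4.],
                 [2., 2., 6., 1.]])
print(max_pool2d(feat))  # [[4. 5.] [3. 6.]]
```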

Pooling Operations in Deep Learning: From “Invariable” to “Variable”

The pooling domain and pooling kernel in the pooling operation are analyzed and their expression form is standardized from the perspective of "invariable" to "variable," and four types of pooling operations are summarized and discussed with their advantages and disadvantages.

Graph neural networks for laminar flow prediction around random two-dimensional shapes

This work proposes a GCNN structure as a surrogate model for laminar flow prediction around 2D obstacles that can be applied directly on body-fitted triangular meshes, yielding easy coupling with CFD solvers.

Meta-learning of Pooling Layers for Character Recognition

A parameterized pooling layer is proposed in which the kernel shape and pooling operation are trainable using two parameters, thereby allowing flexible pooling of the input data and improving the performance of the model in both few-shot character recognition and noisy image recognition tasks.

Fractional Max-Pooling

Fractional max-pooling is found to reduce overfitting on a variety of datasets: for instance, it improves on the state of the art for CIFAR-100 without even using dropout.
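A rough sketch of the core idea: instead of a fixed 2x2 stride, pooling regions of size 1 or 2 are placed at random so that the overall reduction ratio can be fractional (e.g. 7 inputs to 5 outputs, a ratio of 1.4). This is a simplified 1-D illustration under that assumption; function names and the boundary-sampling scheme are illustrative, not the paper's exact construction.

```python
import numpy as np

def fractional_boundaries(n_in, n_out, rng):
    """Random region sizes of 1 or 2 summing to n_in, giving n_out
    regions (requires n_out <= n_in <= 2 * n_out)."""
    sizes = np.ones(n_out, dtype=int)
    grow = rng.choice(n_out, size=n_in - n_out, replace=False)
    sizes[grow] += 1
    return np.concatenate(([0], np.cumsum(sizes)))

def fractional_max_pool1d(x, n_out, rng):
    """Max over each randomly sized region: a fractional reduction ratio."""
    b = fractional_boundaries(len(x), n_out, rng)
    return np.array([x[b[i]:b[i + 1]].max() for i in range(n_out)])

rng = np.random.default_rng(0)
x = np.arange(7, dtype=float)
print(fractional_max_pool1d(x, 5, rng))  # 7 inputs -> 5 outputs (ratio 1.4)
```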

Global Feature Guided Local Pooling

  • Takumi Kobayashi
  • Computer Science
    2019 IEEE/CVF International Conference on Computer Vision (ICCV)
  • 2019
A flexible pooling method is proposed that adaptively tunes the pooling functionality based on input features rather than fixing it manually beforehand, and it works effectively in CNNs trained in an end-to-end manner.

LIP: Local Importance-Based Pooling

This paper proposes a conceptually simple, general, and effective pooling layer based on local importance modeling, termed as Local Importance-based Pooling (LIP), which can automatically enhance discriminative features during the downsampling procedure by learning adaptive importance weights based on inputs.
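The local-importance idea can be sketched as a weighted average within each window, with weights given by the exponential of an importance map (in the real method, that map is produced by a small learned subnetwork; here it is just an input argument, an assumption for illustration). Flat importance recovers average pooling; sharply peaked importance approaches max pooling.

```python
import numpy as np

def lip_pool(x, logits, k=2):
    """Local importance-based pooling (sketch): within each k-by-k window,
    average the features with weights exp(logits). In LIP proper, `logits`
    would come from a learned importance subnetwork."""
    h, w = x.shape
    wts = np.exp(logits)[:h - h % k, :w - w % k]
    xw = (x[:h - h % k, :w - w % k] * wts)
    num = xw.reshape(h // k, k, w // k, k).sum(axis=(1, 3))
    den = wts.reshape(h // k, k, w // k, k).sum(axis=(1, 3))
    return num / den

x = np.array([[1., 3.], [2., 4.]])
print(lip_pool(x, np.zeros_like(x)))  # flat logits -> average pooling: [[2.5]]
print(lip_pool(x, 10. * x))           # peaked logits -> max-like behavior
```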

Building Detail-Sensitive Semantic Segmentation Networks With Polynomial Pooling

  • Zhen Wei, Jingyi Zhang, L. Shao
  • Computer Science
    2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
A polynomial pooling (P-pooling) function is proposed that finds an intermediate form between max and average pooling to provide an optimally balanced and self-adjusted pooling strategy for semantic segmentation.
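One common way such an intermediate form is realized (a sketch of the idea, assuming nonnegative activations; the paper's exact parameterization and its learning of the exponent may differ) is a polynomially weighted average whose exponent interpolates between average and max pooling:

```python
import numpy as np

def p_pool(window, p):
    """Polynomial pooling sketch: sum(x^(p+1)) / sum(x^p) over a window
    of nonnegative activations. p = 0 gives average pooling; as p grows,
    the largest activation dominates and the result approaches the max."""
    w = window ** p
    return (w * window).sum() / w.sum()

win = np.array([1., 2., 3., 4.])
print(p_pool(win, 0))   # 2.5 (average pooling)
print(p_pool(win, 50))  # ~4.0 (max-like)
```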

REMAP: Multi-Layer Entropy-Guided Pooling of Dense CNN Features for Image Retrieval

This paper proposes a novel CNN-based global descriptor, called REMAP, which learns and aggregates a hierarchy of deep features from multiple CNN layers and is trained end-to-end with a triplet loss; such relative entropy-guided aggregation is shown to outperform classical CNN-based aggregation controlled by SGD.

A Simple Pooling-Based Design for Real-Time Salient Object Detection

This work addresses salient object detection by investigating how to expand the role of pooling in convolutional neural networks: a global guidance module (GGM) and a feature aggregation module (FAM) are designed so that coarse-level semantic information is well fused with the fine-level features from the top-down pathway.

Local Temporal Bilinear Pooling for Fine-Grained Action Parsing

A novel bilinear pooling operation is proposed for use in intermediate layers of a temporal convolutional encoder-decoder network; it is learnable and can therefore capture more complex local statistics than its conventional counterpart.
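The basic (non-learnable) form of bilinear pooling over a temporal window, for reference, averages the outer products of the per-frame feature vectors, retaining second-order statistics that plain average pooling discards. This is a generic sketch of bilinear pooling, not the paper's learnable variant; the function name and toy sequence are illustrative.

```python
import numpy as np

def temporal_bilinear_pool(feats):
    """Average of outer products x_t x_t^T over a window of T frames.
    feats: array of shape (T, D); returns a (D, D) second-order statistic."""
    return np.einsum('td,te->de', feats, feats) / len(feats)

seq = np.array([[1., 0.],
                [0., 1.],
                [1., 1.]])
G = temporal_bilinear_pool(seq)
print(G)  # [[0.667 0.333] [0.333 0.667]]
```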

Object Counting with Small Datasets of Large Images

The GSP (global sum pooling) approach achieves state-of-the-art performance on all four datasets, and GSP models trained with smaller-sized image patches localize objects better than their GAP (global average pooling) counterparts.
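Assuming GSP and GAP denote global sum pooling and global average pooling (as the counting context suggests), the difference is a one-line change, but it matters for counting: the sum scales with the number and extent of activated regions, while the average normalizes that signal away. A minimal illustration:

```python
import numpy as np

def gap(fmap):
    """Global average pooling: (H, W) feature map -> scalar mean."""
    return fmap.mean()

def gsp(fmap):
    """Global sum pooling: (H, W) feature map -> scalar sum; the output
    scales with the number of activated regions, which suits counting."""
    return fmap.sum()

# Two "detections" vs. four on same-sized maps (toy activation maps):
a = np.zeros((4, 4)); a[0, 0] = a[3, 3] = 1.
b = np.zeros((4, 4)); b[0, 0] = b[0, 3] = b[3, 0] = b[3, 3] = 1.
print(gsp(a), gsp(b))  # 2.0 4.0  -- the sum tracks the count
print(gap(a), gap(b))  # 0.125 0.25
```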