Learned-Norm Pooling for Deep Feedforward and Recurrent Neural Networks

  • Çaglar Gülçehre, Kyunghyun Cho, Razvan Pascanu, Yoshua Bengio
  • Published in ECML/PKDD 2014
  • Computer Science, Mathematics
  • In this paper we propose and investigate a novel nonlinear unit, called the Lp unit, for deep neural networks. The proposed Lp unit receives signals from several projections of a subset of units in the layer below and computes a normalized Lp norm. We notice two interesting interpretations of the Lp unit. First, the proposed unit can be understood as a generalization of a number of conventional pooling operators, such as average, root-mean-square and max pooling, widely used in, for instance…
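
The pooling behavior described in the abstract — a normalized Lp norm over a group of inputs that recovers average, root-mean-square and max pooling for particular values of p — can be sketched as follows. This is a minimal NumPy illustration; the function name `lp_pool` is a hypothetical helper, not the authors' code:

```python
import numpy as np

def lp_pool(x, p):
    """Normalized Lp norm of a group of N inputs:
    (1/N * sum_i |x_i|^p)^(1/p).
    Hypothetical helper for illustration, not the paper's implementation."""
    x = np.asarray(x, dtype=float)
    return np.mean(np.abs(x) ** p, axis=-1) ** (1.0 / p)

group = [1.0, 2.0, 3.0]
print(lp_pool(group, 1))    # p=1: mean of absolute values -> 2.0
print(lp_pool(group, 2))    # p=2: root-mean-square pooling
print(lp_pool(group, 100))  # large p: approaches max pooling (-> ~3.0)
```

In the learned-norm setting of the paper's title, p itself is a trainable parameter (presumably constrained to p >= 1 so the expression remains a valid norm), letting each unit interpolate between these conventional pooling operators during training.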
    97 Citations
    • Generalizing Pooling Functions in CNNs: Mixed, Gated, and Tree
    • A Fully Trainable Network with RNN-based Pooling
    • α-Integration Pooling for Convolutional Neural Networks
    • Alpha-Pooling for Convolutional Neural Networks
    • Striving for Simplicity: The All Convolutional Net
    • Systematic evaluation of convolution neural network advances on the Imagenet
    • Hartley Spectral Pooling for Deep Learning
    • SimNets: A Generalization of Convolutional Networks
    • Multi Layer Neural Networks as Replacement for Pooling Operations


    References
    • How to Construct Deep Recurrent Neural Networks
    • Deep Learning of Representations
    • Revisiting Natural Gradient for Deep Networks
    • Knowledge Matters: Importance of Prior Information for Optimization
    • Complex cell pooling and the statistics of natural images
    • On the difficulty of training recurrent neural networks
    • Gradient-based learning applied to document recognition