Learned-Norm Pooling for Deep Feedforward and Recurrent Neural Networks
@inproceedings{Glehre2014LearnedNormPF,
  title     = {Learned-Norm Pooling for Deep Feedforward and Recurrent Neural Networks},
  author    = {{\c{C}}aglar G{\"u}l{\c{c}}ehre and Kyunghyun Cho and Razvan Pascanu and Yoshua Bengio},
  booktitle = {ECML/PKDD},
  year      = {2014}
}
In this paper we propose and investigate a novel nonlinear unit, called the L_p unit, for deep neural networks. The proposed L_p unit receives signals from several projections of a subset of units in the layer below and computes a normalized L_p norm. We notice two interesting interpretations of the L_p unit. First, the proposed unit can be understood as a generalization of a number of conventional pooling operators, such as average, root-mean-square, and max pooling, widely used in, for instance, convolutional neural networks.
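Since the abstract describes the unit's computation but the page truncates before any formula, a small NumPy sketch may help. This is a minimal illustration under stated assumptions: the name lp_unit, the projection matrix W, and the fixed scalar p are hypothetical; in the paper, p is itself a learned parameter (hence "learned-norm"), and the paper's exact parameterization and biases are not reproduced here.

```python
import numpy as np

def lp_unit(x, W, p):
    """Normalized L_p norm over several linear projections of the input.

    Hypothetical sketch of the unit described in the abstract:
        y = ( (1/k) * sum_j |w_j . x|^p )^(1/p)
    x : (d,)   activations from the layer below
    W : (k, d) k projection vectors feeding this one unit
    p : scalar norm order, p >= 1 (a learned parameter in the paper)
    """
    z = W @ x                                      # k linear projections
    return np.mean(np.abs(z) ** p) ** (1.0 / p)    # normalized L_p norm

# Special cases recover the pooling operators named in the abstract:
#   p = 1    -> average pooling (of absolute values)
#   p = 2    -> root-mean-square pooling
#   p -> inf -> max pooling over |z|
rng = np.random.default_rng(0)
x = rng.normal(size=8)
W = rng.normal(size=(4, 8))
for p in (1.0, 2.0, 100.0):
    print(f"p={p:5.1f}  y={lp_unit(x, W, p):.4f}")
print(f"max |z| = {np.abs(W @ x).max():.4f}")  # limit as p -> inf
```

With p = 100 the output is already close to max|z|, which is the sense in which the L_p unit generalizes max pooling.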
97 Citations, including:
- "Generalizing Pooling Functions in CNNs: Mixed, Gated, and Tree." IEEE Transactions on Pattern Analysis and Machine Intelligence, 2018.
- "Systematic Evaluation of Convolution Neural Network Advances on the ImageNet." Computer Vision and Image Understanding, 2017.