Corpus ID: 6949717

Stochastic Pooling for Regularization of Deep Convolutional Neural Networks

@article{Zeiler2013StochasticPF,
  title={Stochastic Pooling for Regularization of Deep Convolutional Neural Networks},
  author={Matthew D. Zeiler and Rob Fergus},
  journal={CoRR},
  year={2013},
  volume={abs/1301.3557}
}
We introduce a simple and effective method for regularizing large convolutional neural networks. [...] The approach is hyper-parameter free and can be combined with other regularization approaches, such as dropout and data augmentation. We achieve state-of-the-art performance on four image datasets, relative to other approaches that do not utilize data augmentation.
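As a concrete illustration of the method summarized above: within each pooling region, the non-negative activations are normalized into a multinomial distribution; training samples one activation from it, while testing takes the probability-weighted average. Below is a minimal NumPy sketch; the function name, the 2x2 region size, and the all-zero fallback are assumptions, not the authors' reference implementation.

    import numpy as np

    def stochastic_pool(x, size=2, train=True, rng=None):
        # Stochastic pooling (sketch): within each non-overlapping
        # `size` x `size` region of a 2-D map of non-negative activations
        # (e.g. post-ReLU), normalize the activations into a multinomial
        # distribution; sample from it at training time, and take the
        # probability-weighted average at test time.
        rng = rng or np.random.default_rng()
        h, w = x.shape
        out = np.empty((h // size, w // size), dtype=float)
        for i in range(0, h, size):
            for j in range(0, w, size):
                region = x[i:i + size, j:j + size].ravel()
                total = region.sum()
                if total == 0:
                    out[i // size, j // size] = 0.0  # all-zero region
                    continue
                p = region / total
                if train:
                    out[i // size, j // size] = rng.choice(region, p=p)
                else:
                    out[i // size, j // size] = np.dot(p, region)
        return out

Because each value is drawn in proportion to its magnitude, strong activations still dominate on average, while the randomness acts as a regularizer in the spirit of dropout.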
Max-Pooling Dropout for Regularization of Convolutional Neural Networks
TLDR
This paper demonstrates that max-pooling dropout is equivalent to randomly picking activations based on a multinomial distribution at training time, and advocates employing the proposed probabilistic weighted pooling, instead of the commonly used max pooling, to act as model averaging at test time.
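A minimal NumPy sketch of the training-time behavior this TLDR describes; the function name, the default drop probability, and the zero fallback for an all-dropped region are assumptions.

    import numpy as np

    def max_pool_dropout_train(region, p_drop=0.5, rng=None):
        # Max-pooling dropout at training time (sketch): drop each unit in
        # the pooling region with probability p_drop, then max-pool over
        # the survivors. Per the TLDR, this is equivalent to sampling one
        # activation from a multinomial distribution determined by p_drop
        # and the ordering of the activations.
        rng = rng or np.random.default_rng()
        kept = region[rng.random(region.shape) >= p_drop]
        return kept.max() if kept.size else 0.0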
Restricted stochastic pooling for convolutional neural network
TLDR
This paper proposes a novel pooling layer, named the restricted stochastic pooling layer, which not only obtains representative activations but also adds randomness to the model, showing good performance on SVHN and CIFAR-10.
General Stochastic Networks for Classification
TLDR
This work introduces a hybrid training objective combining a generative and a discriminative cost function, governed by a trade-off parameter λ, and uses a new variant of network training involving noise injection, i.e. walkback training, to jointly optimize multiple network layers.
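Read literally, the hybrid objective is a λ-weighted combination of the two cost functions; the exact weighting convention below is an assumption:

    L(θ) = λ · L_generative(θ) + (1 − λ) · L_discriminative(θ)

with λ = 0 purely discriminative and λ = 1 purely generative.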
RegP: A New Pooling Algorithm for Deep Convolutional Neural Networks
TLDR
The main idea of this approach is to find the most distinguishing parts within regions of the input by investigating neighboring regions, in order to construct the pooled representation in deep convolutional neural networks.
An Improved Pooling Scheme for Convolutional Neural Networks
TLDR
Experiments demonstrate improved performance with Accept-Reject Pooling compared to several pooling methods, such as max, stochastic, and mixed pooling, on benchmark image classification datasets.
A Novel Pooling Method for Regularization of Deep Neural Networks
TLDR
Experimental results on several image benchmarks show that Spectral Dropout Pooling outperforms existing pooling methods in classification performance and is effective for improving the generalization ability of DCNNs.
A New Pooling Method for Improvement of Generalization Ability in Deep Convolutional Neural Networks
TLDR
Experimental results indicate that the new pooling method, named l pooling, outperforms existing pooling techniques in classification performance and is efficient at enhancing the generalization capability of DCNNs.
Generalizing Pooling Functions in Convolutional Neural Networks: Mixed, Gated, and Tree
TLDR
The proposed pooling operations provide a boost in invariance properties relative to conventional pooling and set the state of the art on several widely adopted benchmark datasets; they are also easy to implement, and can be applied within various deep neural network architectures.
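As one reading of the "gated" variant named in the title, a learned gate, computed from the pooling region itself, mixes max and average pooling. A minimal sketch; the sigmoid gate parameterization and the per-region weight vector w are assumptions.

    import numpy as np

    def gated_pool(region, w):
        # Gated max-average pooling (sketch): a sigmoid gate computed from
        # the region decides how to mix max pooling and average pooling.
        # `w` is a learned weight vector with one entry per unit in the
        # flattened region.
        gate = 1.0 / (1.0 + np.exp(-np.dot(w, region.ravel())))
        return gate * region.max() + (1.0 - gate) * region.mean()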
Mixed Pooling for Convolutional Neural Networks
TLDR
A novel feature pooling method is proposed to regularize CNNs; it replaces deterministic pooling operations with a stochastic procedure that randomly chooses between the conventional max pooling and average pooling methods.
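A minimal sketch of the stochastic choice this TLDR describes; the per-region coin flip and the 0.5 mixing probability are assumptions.

    import numpy as np

    def mixed_pool(region, p_max=0.5, rng=None):
        # Mixed pooling (sketch): during training, randomly apply either
        # max pooling or average pooling to the region. p_max is the
        # assumed probability of choosing max pooling.
        rng = rng or np.random.default_rng()
        return region.max() if rng.random() < p_max else region.mean()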
Towards dropout training for convolutional neural networks
TLDR
It is demonstrated that max-pooling dropout is equivalent to randomly picking activations based on a multinomial distribution at training time, and the proposed probabilistic weighted pooling is advocated, instead of the commonly used max pooling, to act as model averaging at test time.

References

Showing 1-10 of 19 references
Flexible, High Performance Convolutional Neural Networks for Image Classification
We present a fast, fully parameterizable GPU implementation of Convolutional Neural Network variants. Our feature extractors are neither carefully designed nor pre-wired, but rather learned in a supervised way.
Adaptive deconvolutional networks for mid and high level feature learning
TLDR
A hierarchical model that learns image decompositions via alternating layers of convolutional sparse coding and max pooling, relying on a novel inference scheme that ensures each layer reconstructs the input, rather than just the output of the layer directly beneath, as is common with existing hierarchical approaches.
Neural Networks: Tricks of the Trade
TLDR
It is shown how nonlinear semi-supervised embedding algorithms popular for use with "shallow" learning techniques such as kernel methods can be easily applied to deep multi-layer architectures.
Beyond spatial pyramids: Receptive field learning for pooled image features
TLDR
This paper shows that learning more adaptive receptive fields increases performance even with a significantly smaller codebook size at the coding layer, and adopts the idea of over-completeness to learn the optimal pooling parameters.
Convolutional neural networks applied to house numbers digit classification
TLDR
This work augments the traditional ConvNet architecture by learning multi-stage features and by using Lp pooling, and establishes a new state of the art of 95.10% accuracy on the SVHN dataset (a 48% error improvement).
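For reference, Lp pooling computes a p-norm-style average over the pooling region; the paper additionally applies a spatial weighting, which the plain mean in this sketch omits.

    import numpy as np

    def lp_pool(region, p=2.0):
        # Lp pooling (sketch): (mean |x|^p)^(1/p). p = 1 recovers average
        # pooling, and p -> infinity approaches max pooling.
        return np.mean(np.abs(region) ** p) ** (1.0 / p)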
Learning Multiple Layers of Features from Tiny Images
TLDR
It is shown how to train a multi-layer generative model that learns to extract meaningful features which resemble those found in the human visual cortex, using a novel parallelization algorithm to distribute the work among multiple machines connected on a network.
Improving neural networks by preventing co-adaptation of feature detectors
When a large feedforward neural network is trained on a small training set, it typically performs poorly on held-out test data. This "overfitting" is greatly reduced by randomly omitting half of the feature detectors on each training case.
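A minimal sketch of the training-time operation described in this abstract; the function name and the mask-based formulation are assumptions (at test time, the paper instead uses all units with their outgoing weights halved).

    import numpy as np

    def dropout_train(activations, p_drop=0.5, rng=None):
        # Dropout at training time (sketch): independently zero each hidden
        # unit with probability p_drop; p_drop = 0.5 matches "omitting half
        # of the feature detectors".
        rng = rng or np.random.default_rng()
        return activations * (rng.random(activations.shape) >= p_drop)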
Best practices for convolutional neural networks applied to visual document analysis
TLDR
A set of concrete best practices that document analysis researchers can use to get good results with neural networks, including a simple "do-it-yourself" implementation of convolution with a flexible architecture suitable for many visual document problems.
Rectified Linear Units Improve Restricted Boltzmann Machines
TLDR
Rectified linear hidden units, replacing the binary stochastic hidden units with which restricted Boltzmann machines were originally developed, learn features that are better for object recognition on the NORB dataset and face verification on the Labeled Faces in the Wild dataset.
Reading Digits in Natural Images with Unsupervised Feature Learning
TLDR
A new benchmark dataset for research use is introduced, containing over 600,000 labeled digits cropped from Street View images, and variants of two recently proposed unsupervised feature learning methods are employed and found to be convincingly superior on this benchmark.