PatchUp: A Regularization Technique for Convolutional Neural Networks

@article{Faramarzi2020PatchUpAR,
  title={PatchUp: A Regularization Technique for Convolutional Neural Networks},
  author={M. Faramarzi and M. Amini and Akilesh Badrinaaraayanan and Vikas Verma and A. Chandar},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.07794}
}
Large-capacity deep learning models are often prone to a high generalization gap when trained with a limited amount of labeled training data. A recent class of methods addresses this problem by constructing a new training sample from a mixture of a pair (or more) of training samples. We propose PatchUp, a hidden-state block-level regularization technique for Convolutional Neural Networks (CNNs), applied to selected contiguous blocks of feature maps from a random pair of samples…
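
At a high level, the abstract describes selecting contiguous blocks of hidden feature maps and exchanging (or interpolating) them between a random pair of samples in the batch. Below is a minimal PyTorch sketch of that idea for the "swap" case, assuming a DropBlock-style contiguous block mask; the function names and the hyperparameters gamma and block_size are illustrative placeholders, not the paper's tuned settings or exact formulation.

import torch
import torch.nn.functional as F

def block_mask(feat, gamma=0.05, block_size=7):
    # Sample Bernoulli(gamma) "seed" units, then dilate each seed into a
    # block_size x block_size block via max-pooling (DropBlock-style mask).
    seeds = (torch.rand_like(feat) < gamma).float()
    mask = F.max_pool2d(seeds, kernel_size=block_size,
                        stride=1, padding=block_size // 2)
    return mask  # 1 inside selected blocks, 0 elsewhere

def hard_patchup(feat, y, gamma=0.05, block_size=7):
    # Pair each sample with a random partner from the same batch and swap
    # the masked feature-map blocks between the two.
    perm = torch.randperm(feat.size(0), device=feat.device)
    mask = block_mask(feat, gamma, block_size)
    mixed = (1.0 - mask) * feat + mask * feat[perm]
    # Per-sample fraction of unchanged units, used to weight the two labels.
    lam = 1.0 - mask.mean(dim=(1, 2, 3))
    return mixed, y, y[perm], lam

In a Mixup-style training loop, the loss on the original and partner labels would then be weighted by lam and 1 - lam per sample; a "soft" variant would interpolate the masked blocks instead of swapping them.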