Random Erasing Data Augmentation
@inproceedings{Zhong2017RandomED,
  title     = {Random Erasing Data Augmentation},
  author    = {Zhun Zhong and Liang Zheng and Guoliang Kang and Shaozi Li and Yi Yang},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2017}
}
In this paper, we introduce Random Erasing, a new data augmentation method for training convolutional neural networks (CNNs). In training, Random Erasing randomly selects a rectangular region in an image and erases its pixels with random values. This process generates training images with various levels of occlusion, which reduces the risk of over-fitting and makes the model robust to occlusion. Random Erasing is parameter-learning free, easy to implement, and can be integrated with…
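The erasing step described above can be sketched in NumPy as follows. This is a minimal illustration, not the authors' reference implementation; the hyperparameter names and defaults (`p`, `area_range`, `aspect_range`) are assumptions chosen to mirror the paper's description of random area and aspect ratio.

```python
import numpy as np

def random_erasing(img, p=0.5, area_range=(0.02, 0.4), aspect_range=(0.3, 3.3), rng=None):
    """Erase a random rectangle of `img` (H x W x C, uint8) with random pixel values.

    Hyperparameter names/defaults here are illustrative, not the paper's exact ones.
    """
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() > p:
        return img  # skip erasing with probability 1 - p
    h, w = img.shape[:2]
    for _ in range(100):  # retry until the sampled rectangle fits inside the image
        area = rng.uniform(*area_range) * h * w
        aspect = rng.uniform(*aspect_range)
        eh = int(round(np.sqrt(area * aspect)))
        ew = int(round(np.sqrt(area / aspect)))
        if 0 < eh < h and 0 < ew < w:
            top = rng.integers(0, h - eh)
            left = rng.integers(0, w - ew)
            out = img.copy()
            # fill the selected rectangle with uniform random pixel values
            out[top:top + eh, left:left + ew] = rng.integers(
                0, 256, size=(eh, ew) + img.shape[2:], dtype=img.dtype)
            return out
    return img
```

Because the erased values are random rather than constant, the network cannot rely on any fixed fill pattern, which is one way this differs from constant-fill masking.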
1,980 Citations
Data Augmentation Using Mixup and Random Erasing
- Computer Science · 2022 IEEE International Conference on Networking, Sensing and Control (ICNSC)
- 2022
This paper proposes two different combinations of these two methods, namely RSM and RDM, to compensate for their respective shortcomings in object detection and image classification on various datasets.
Region-aware Random Erasing
- Computer Science · 2019 IEEE 19th International Conference on Communication Technology (ICCT)
- 2019
A Region-aware Random Erasing data augmentation method is proposed, which not only enlarges the training dataset to reduce overfitting without discarding objects, but also reduces the impact of background information.
Stride Random Erasing Augmentation
- Computer Science · Artificial Intelligence, Soft Computing and Applications
- 2022
A new method for data augmentation called Stride Random Erasing Augmentation (SREA) is proposed to improve classification performance; it outperforms the baseline and random erasing, especially on the Fashion-MNIST dataset.
Data Augmentation Using Random Image Cropping and Patching for Deep CNNs
- Computer Science · IEEE Transactions on Circuits and Systems for Video Technology
- 2020
A new data augmentation technique called random image cropping and patching (RICAP) which randomly crops four images and patches them to create a new training image and achieves a new state-of-the-art test error of 2.19% on CIFAR-10.
RICAP: Random Image Cropping and Patching Data Augmentation for Deep CNNs
- Computer Science · ACML
- 2018
A new data augmentation technique called random image cropping and patching (RICAP), which randomly crops four images and patches them to construct a new training image, enriching the variety of training images.
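The crop-and-patch scheme summarized above can be sketched as a batch operation. This is a hedged sketch of the RICAP idea, not the authors' code; the function name, the `beta` parameter, and the one-hot label format are assumptions. Four source images are cropped and tiled around a random boundary point, and labels are mixed in proportion to patch area.

```python
import numpy as np

def ricap(images, labels, beta=0.3, rng=None):
    """RICAP-style batch augmentation sketch (names/defaults illustrative).

    images: (N, H, W, C) array; labels: (N, num_classes) one-hot array.
    Each output image is tiled from crops of four randomly chosen inputs;
    labels are mixed in proportion to the patched areas.
    """
    rng = np.random.default_rng() if rng is None else rng
    n, h, w, c = images.shape
    # random boundary point splitting the canvas into four patches
    wx = int(np.round(w * rng.beta(beta, beta)))
    wy = int(np.round(h * rng.beta(beta, beta)))
    widths = [wx, w - wx, wx, w - wx]
    heights = [wy, wy, h - wy, h - wy]
    corners = [(0, 0), (0, wx), (wy, 0), (wy, wx)]
    out = np.empty_like(images)
    mixed = np.zeros_like(labels, dtype=float)
    for k in range(4):
        idx = rng.integers(0, n, size=n)         # source image for patch k, per sample
        ph, pw = heights[k], widths[k]
        top = rng.integers(0, h - ph + 1)        # random crop position in the sources
        left = rng.integers(0, w - pw + 1)
        y0, x0 = corners[k]
        out[:, y0:y0 + ph, x0:x0 + pw] = images[idx, top:top + ph, left:left + pw]
        mixed += labels[idx] * (ph * pw / (h * w))  # area-proportional label mixing
    return out, mixed
```

The mixed labels always sum to one per sample, since the four patch areas partition the canvas.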
Image data augmentation method based on maximum activation point guided erasure
- Computer Science · 2020 2nd International Conference on Advances in Computer Technology, Information Science and Communications (CTISC)
- 2020
The image erasure method guided by the maximum activation point only needs to modify the input image; it can effectively improve the robustness of the model to occluded image recognition and can be integrated with various network structures.
KeepAugment: A Simple Information-Preserving Data Augmentation Approach
- Computer Science · 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2021
This paper empirically shows that the standard data augmentation methods may introduce distribution shift and consequently hurt the performance on unaugmented data during inference, and proposes a simple yet effective approach, dubbed KeepAugment, to increase the fidelity of augmented images.
Local Magnification for Data and Feature Augmentation
- Computer Science · ArXiv
- 2022
Experiments show that the proposed LOMA, though straightforward, can be combined with standard data augmentation to significantly improve performance on image classification and object detection, and can further strengthen the model, outperforming advanced intensity-transformation methods for data augmentation.
Random Polygon Cover for Oracle Bone Character Recognition
- Computer Science · 2021 5th International Conference on Computer Science and Artificial Intelligence
- 2021
This work proposes a random polygon cover algorithm to simulate possible object damage and partial content loss in the training dataset, which also serves as a data augmentation technique.
Feature transforms for image data augmentation
- Materials Science · Neural Computing and Applications
- 2022
A problem with convolutional neural networks (CNNs) is that they require large datasets to obtain adequate robustness; on small datasets, they are prone to overfitting. Many methods have been…
References
SHOWING 1-10 OF 52 REFERENCES
PatchShuffle Regularization
- Computer Science · ArXiv
- 2017
Experiments on four representative classification datasets show that PatchShuffle improves the generalization ability of CNNs, especially when data is scarce, and empirically illustrate that CNN models trained with PatchShuffle are more robust to noise and local changes in an image.
DisturbLabel: Regularizing CNN on the Loss Layer
- Computer Science · 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2016
An extremely simple algorithm that randomly replaces a part of the labels with incorrect values in each iteration, which prevents the network training from over-fitting by implicitly averaging over exponentially many networks trained with different label sets.
Improved Regularization of Convolutional Neural Networks with Cutout
- Computer Science · ArXiv
- 2017
This paper shows that the simple regularization technique of randomly masking out square regions of input during training, which is called cutout, can be used to improve the robustness and overall performance of convolutional neural networks.
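The square-masking described above can be sketched as follows. This is an illustrative sketch of the Cutout idea, not the authors' code; the `size` default is an assumption (the paper tunes the square size per dataset), and unlike Random Erasing the masked region is filled with a constant value.

```python
import numpy as np

def cutout(img, size=8, rng=None):
    """Cutout sketch: zero out one randomly centred `size` x `size` square.

    `size` default is illustrative; the original work tunes it per dataset.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = img.shape[:2]
    cy = rng.integers(0, h)  # the square's centre may land near the border,
    cx = rng.integers(0, w)  # so the mask is clipped to the image bounds
    y0, y1 = max(0, cy - size // 2), min(h, cy + size // 2)
    x0, x1 = max(0, cx - size // 2), min(w, cx + size // 2)
    out = img.copy()
    out[y0:y1, x0:x1] = 0  # Cutout fills with a constant (zero) value
    return out
```

Allowing the square to extend past the border means the effective masked area varies, which the Cutout paper reports as helpful in practice.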
Very Deep Convolutional Networks for Large-Scale Image Recognition
- Computer Science · ICLR
- 2015
This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Unlabeled Samples Generated by GAN Improve the Person Re-identification Baseline in Vitro
- Computer Science · 2017 IEEE International Conference on Computer Vision (ICCV)
- 2017
A simple semi-supervised pipeline that uses only the original training set without collecting extra data, which effectively improves the discriminative ability of learned CNN embeddings, and proposes the label smoothing regularization for outliers (LSRO).
Stochastic Pooling for Regularization of Deep Convolutional Neural Networks
- Computer Science · ICLR
- 2013
We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly…
Aggregated Residual Transformations for Deep Neural Networks
- Computer Science · 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2017
On the ImageNet-1K dataset, it is empirically shown that, even under the restricted condition of maintaining complexity, increasing cardinality improves classification accuracy and is more effective than going deeper or wider as capacity increases.
Gated Siamese Convolutional Neural Network Architecture for Human Re-identification
- Computer Science · ECCV
- 2016
A gating function is proposed to selectively emphasize such fine common local patterns that may be essential to distinguish positive pairs from hard negative pairs by comparing the mid-level features across pairs of images.
In Defense of the Triplet Loss for Person Re-Identification
- Computer Science · ArXiv
- 2017
It is shown that, for models trained from scratch as well as pretrained ones, using a variant of the triplet loss to perform end-to-end deep metric learning outperforms most other published methods by a large margin.
DeepReID: Deep Filter Pairing Neural Network for Person Re-identification
- Computer Science · 2014 IEEE Conference on Computer Vision and Pattern Recognition
- 2014
A novel filter pairing neural network (FPNN) to jointly handle misalignment, photometric and geometric transforms, occlusions and background clutter is proposed and significantly outperforms state-of-the-art methods on this dataset.