Random Erasing Data Augmentation

  • Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li and Yi Yang
  • AAAI Conference on Artificial Intelligence
  • 2020
In this paper, we introduce Random Erasing, a new data augmentation method for training the convolutional neural network (CNN). In training, Random Erasing randomly selects a rectangle region in an image and erases its pixels with random values. In this process, training images with various levels of occlusion are generated, which reduces the risk of over-fitting and makes the model robust to occlusion. Random Erasing is parameter learning free, easy to implement, and can be integrated with… 
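
The erasing procedure described above can be sketched in a few lines; the hyperparameter names (`p`, `area_range`, `aspect_range`) follow common implementations and are assumptions, not necessarily the paper's exact notation:

```python
import numpy as np

def random_erase(img, p=0.5, area_range=(0.02, 0.4), aspect_range=(0.3, 3.3), rng=None):
    """Erase a random rectangle of `img` (H x W x C) with random values."""
    rng = rng or np.random.default_rng()
    if rng.random() > p:           # apply with probability p
        return img
    h, w = img.shape[:2]
    for _ in range(100):           # retry until a rectangle fits
        area = rng.uniform(*area_range) * h * w
        aspect = rng.uniform(*aspect_range)
        eh = int(round(np.sqrt(area * aspect)))
        ew = int(round(np.sqrt(area / aspect)))
        if 0 < eh < h and 0 < ew < w:
            y = rng.integers(0, h - eh)
            x = rng.integers(0, w - ew)
            out = img.copy()
            # fill the erased region with random pixel values
            out[y:y+eh, x:x+ew] = rng.uniform(0, 255, size=(eh, ew, img.shape[2]))
            return out
    return img                     # no valid rectangle found
```

The random-value fill is one of several erasing modes discussed in the paper; filling with the dataset mean is another common choice.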

Data Augmentation Using Mixup and Random Erasing

This paper proposes two different combinations of these two methods, namely RSM and RDM, to compensate for their respective shortcomings, evaluated on object detection and image classification across various datasets.
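
Mixup, one of the two components being combined, can be sketched as follows (function and parameter names are illustrative, not the authors' code):

```python
import numpy as np

def mixup(x1, y1, x2, y2, num_classes, alpha=1.0, rng=None):
    """Convexly combine two images and their one-hot labels."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)        # mixing coefficient in [0, 1]
    x = lam * x1 + (1 - lam) * x2       # pixel-wise blend
    y = np.zeros(num_classes)
    y[y1] += lam                        # soft label weighted by lam
    y[y2] += 1 - lam
    return x, y
```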

Region-aware Random Erasing

A Region-aware Random Erasing data augmentation method is proposed, which can not only enlarge the training dataset to reduce overfitting without discarding objects, but also reduce the impact of background information.

Stride Random Erasing Augmentation

A new method for data augmentation called Stride Random Erasing Augmentation (SREA) to improve classification performance, which outperforms the baseline and random erasing especially on the fashion-MNIST dataset.

Data Augmentation Using Random Image Cropping and Patching for Deep CNNs

A new data augmentation technique called random image cropping and patching (RICAP) which randomly crops four images and patches them to create a new training image and achieves a new state-of-the-art test error of 2.19% on CIFAR-10.

RICAP: Random Image Cropping and Patching Data Augmentation for Deep CNNs

A new data augmentation technique called random image cropping and patching (RICAP), which randomly crops four images and patches them to construct a new training image, enriching the variety of training images.
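
The four-image patching described in both RICAP entries can be sketched roughly as below; the Beta-distributed boundary follows the paper, while the function signature is an assumption:

```python
import numpy as np

def ricap(images, labels, num_classes, beta=0.3, rng=None):
    """Crop four images (4 x H x W x C) and patch them into one.

    Returns the patched image and a soft label weighted by patch area.
    """
    rng = rng or np.random.default_rng()
    h, w = images.shape[1:3]
    # boundary position drawn from a Beta distribution, as in the paper
    bh = int(np.round(h * rng.beta(beta, beta)))
    bw = int(np.round(w * rng.beta(beta, beta)))
    sizes = [(bh, bw), (bh, w - bw), (h - bh, bw), (h - bh, w - bw)]
    corners = [(0, 0), (0, bw), (bh, 0), (bh, bw)]
    out = np.zeros_like(images[0])
    label = np.zeros(num_classes)
    for img, lab, (ph, pw), (cy, cx) in zip(images, labels, sizes, corners):
        if ph == 0 or pw == 0:
            continue                       # degenerate patch, skip
        y = rng.integers(0, h - ph + 1)    # random crop position
        x = rng.integers(0, w - pw + 1)
        out[cy:cy+ph, cx:cx+pw] = img[y:y+ph, x:x+pw]
        label[lab] += ph * pw / (h * w)    # label weight = area fraction
    return out, label
```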

Image data augmentation method based on maximum activation point guided erasure

The proposed erasure method, guided by the maximum activation point, only needs to modify the input image; it effectively improves the model's robustness to occluded image recognition and can be integrated with various network structures.

KeepAugment: A Simple Information-Preserving Data Augmentation Approach

This paper empirically shows that the standard data augmentation methods may introduce distribution shift and consequently hurt the performance on unaugmented data during inference, and proposes a simple yet effective approach, dubbed KeepAugment, to increase the fidelity of augmented images.

Local Magnification for Data and Feature Augmentation

Experiments show that the proposed LOMA, though straightforward, can be combined with standard data augmentation to significantly improve performance on image classification and object detection, and continues to strengthen the model, outperforming advanced intensity-transformation methods for data augmentation.

Random Polygon Cover for Oracle Bone Character Recognition

  • Liu Dazheng
  • 2021 5th International Conference on Computer Science and Artificial Intelligence
  • 2021
This work proposes random polygon cover algorithm to simulate the possible damage object and partial content loss in training dataset, which is also a data augmentation technique.

Feature transforms for image data augmentation

A problem with convolutional neural networks (CNNs) is that they require large datasets to obtain adequate robustness; on small datasets, they are prone to overfitting. Many methods have been…

PatchShuffle Regularization

Experiments on four representative classification datasets show that PatchShuffle improves the generalization ability of CNNs, especially when data is scarce, and empirically illustrate that CNN models trained with PatchShuffle are more robust to noise and local changes in an image.
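
The core transform, shuffling pixels within non-overlapping blocks, can be sketched as follows (a minimal version for a 2-D image whose sides are divisible by the patch size; the signature is an assumption):

```python
import numpy as np

def patch_shuffle(img, patch=2, rng=None):
    """Shuffle pixels within each non-overlapping patch x patch block."""
    rng = rng or np.random.default_rng()
    h, w = img.shape
    # regroup the image into blocks of shape (patch*patch,)
    blocks = img.reshape(h // patch, patch, w // patch, patch).transpose(0, 2, 1, 3)
    blocks = blocks.reshape(-1, patch * patch).copy()
    for block in blocks:
        rng.shuffle(block)              # permute pixels inside the block
    # reassemble the shuffled blocks into image layout
    blocks = blocks.reshape(h // patch, w // patch, patch, patch)
    return blocks.transpose(0, 2, 1, 3).reshape(h, w)
```

Each block keeps the same multiset of pixel values, so only local spatial structure is perturbed.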

DisturbLabel: Regularizing CNN on the Loss Layer

An extremely simple algorithm which randomly replaces a portion of the labels with incorrect values in each iteration, preventing the network from over-fitting by implicitly averaging over exponentially many networks trained with different label sets.
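
A label-disturbing step in this spirit can be sketched as below; in this simplified version each label is, with probability `alpha`, redrawn uniformly over all classes (so a disturbed label may also land back on the true class). Names and the exact noise model are assumptions:

```python
import numpy as np

def disturb_labels(labels, num_classes, alpha=0.1, rng=None):
    """Replace each label with a uniformly random class with prob. alpha."""
    rng = rng or np.random.default_rng()
    labels = np.asarray(labels).copy()
    mask = rng.random(labels.shape) < alpha            # which labels to disturb
    labels[mask] = rng.integers(0, num_classes, size=mask.sum())
    return labels
```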

Improved Regularization of Convolutional Neural Networks with Cutout

This paper shows that the simple regularization technique of randomly masking out square regions of input during training, which is called cutout, can be used to improve the robustness and overall performance of convolutional neural networks.
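
Masking out a square region, as Cutout does, can be sketched in a few lines; here the square is centered at a random pixel and clipped at the border, with the `size` parameter as an assumption for the mask side length:

```python
import numpy as np

def cutout(img, size=8, rng=None):
    """Zero out a size x size square at a random center, clipped at borders."""
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    cy = rng.integers(0, h)                 # random mask center
    cx = rng.integers(0, w)
    y0, y1 = max(0, cy - size // 2), min(h, cy + size // 2)
    x0, x1 = max(0, cx - size // 2), min(w, cx + size // 2)
    out = img.copy()
    out[y0:y1, x0:x1] = 0                   # erase with a constant value
    return out
```

Unlike Random Erasing above, Cutout fills the region with a constant rather than random values.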

Very Deep Convolutional Networks for Large-Scale Image Recognition

This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.

Unlabeled Samples Generated by GAN Improve the Person Re-identification Baseline in Vitro

A simple semi-supervised pipeline that only uses the original training set without collecting extra data, which effectively improves the discriminative ability of learned CNN embeddings and proposes the label smoothing regularization for outliers (LSRO).

Stochastic Pooling for Regularization of Deep Convolutional Neural Networks

We introduce a simple and effective method for regularizing large convolutional neural networks. We replace the conventional deterministic pooling operations with a stochastic procedure, randomly…
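
The stochastic procedure samples one activation per pooling region with probability proportional to its value. A minimal sketch for a single 2-D activation map with non-negative entries (the function signature is an assumption):

```python
import numpy as np

def stochastic_pool(x, k=2, rng=None):
    """Pool a 2-D non-negative map by sampling within each k x k region."""
    rng = rng or np.random.default_rng()
    h, w = x.shape
    out = np.zeros((h // k, w // k))
    for i in range(h // k):
        for j in range(w // k):
            region = x[i*k:(i+1)*k, j*k:(j+1)*k].ravel()
            total = region.sum()
            if total == 0:
                continue                    # all-zero region pools to zero
            out[i, j] = rng.choice(region, p=region / total)
    return out
```

At test time the paper instead uses the probability-weighted average of each region, which this sketch omits.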

Aggregated Residual Transformations for Deep Neural Networks

On the ImageNet-1K dataset, it is empirically shown that, even under the restricted condition of maintaining complexity, increasing cardinality improves classification accuracy and is more effective than going deeper or wider when capacity is increased.

Gated Siamese Convolutional Neural Network Architecture for Human Re-identification

A gating function is proposed to selectively emphasize such fine common local patterns that may be essential to distinguish positive pairs from hard negative pairs by comparing the mid-level features across pairs of images.

In Defense of the Triplet Loss for Person Re-Identification

It is shown that, for models trained from scratch as well as pretrained ones, using a variant of the triplet loss to perform end-to-end deep metric learning outperforms most other published methods by a large margin.
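
The basic margin-based triplet loss underlying the variants in the paper can be written for a single triplet of embeddings as below (the soft-margin and batch-hard variants the paper advocates are omitted; `margin` is an assumed default):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Hinge loss pushing d(anchor, positive) below d(anchor, negative)."""
    d_ap = np.linalg.norm(anchor - positive)   # same-identity distance
    d_an = np.linalg.norm(anchor - negative)   # different-identity distance
    return max(0.0, d_ap - d_an + margin)
```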

DeepReID: Deep Filter Pairing Neural Network for Person Re-identification

A novel filter pairing neural network (FPNN) to jointly handle misalignment, photometric and geometric transforms, occlusions and background clutter is proposed, which significantly outperforms state-of-the-art methods on the large-scale dataset introduced in the paper.