Corpus ID: 208637407

AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty

@article{Hendrycks2020AugMixAS,
  title={AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty},
  author={Dan Hendrycks and Norman Mu and Ekin Dogus Cubuk and Barret Zoph and Justin Gilmer and Balaji Lakshminarayanan},
  journal={ArXiv},
  year={2020},
  volume={abs/1912.02781}
}
Modern deep neural networks can achieve high accuracy when the training distribution and test distribution are identically distributed, but this assumption is frequently violated in practice. When the train and test distributions are mismatched, accuracy can plummet. Currently there are few techniques that improve robustness to unforeseen data shifts encountered during deployment. In this work, we propose a technique to improve the robustness and uncertainty estimates of image classifiers. We propose AugMix, a data processing technique that is simple to implement, adds limited computational overhead, and helps models withstand unforeseen corruptions. AugMix significantly improves robustness and uncertainty measures on challenging image classification benchmarks, closing the gap between previous methods and the best possible performance in some cases by more than half.
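At its core, AugMix mixes the outputs of several short, randomly sampled augmentation chains with the original image, using convex weights drawn from Dirichlet and Beta distributions. Below is a minimal NumPy sketch of that mixing step, assuming augment_ops is a user-supplied list of label-preserving image operations; the authors' reference implementation is at github.com/google-research/augmix.

import numpy as np

def augmix(image, augment_ops, width=3, depth=-1, alpha=1.0, rng=np.random):
    """Mix `width` randomly sampled augmentation chains with the original.

    image: float array in [0, 1], shape (H, W, C). Illustrative sketch only.
    """
    ws = rng.dirichlet([alpha] * width)        # convex weights over chains
    m = rng.beta(alpha, alpha)                 # blend weight with the original
    mix = np.zeros_like(image)
    for i in range(width):
        chain_depth = depth if depth > 0 else rng.randint(1, 4)
        x = image.copy()
        for _ in range(chain_depth):           # compose 1-3 random ops
            op = augment_ops[rng.randint(len(augment_ops))]
            x = op(x)
        mix += ws[i] * x
    return m * image + (1.0 - m) * mix

The full training recipe additionally enforces a Jensen-Shannon consistency loss between predictions on the clean image and two AugMix views of it.

Citations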
Misclassification-Aware Gaussian Smoothing and Mixed Augmentations improves Robustness against Domain Shifts
TLDR
This paper proposes a misclassification-aware Gaussian smoothing approach, coupled with mixed data augmentations, for improving robustness of image classifiers against a variety of corruptions while still maintaining high clean accuracy.
Improving robustness against common corruptions with frequency biased models
TLDR
A mixture of two expert models, specializing in high- and low-frequency robustness respectively, is introduced, and a new regularization scheme that minimizes the total variation (TV) of convolutional feature maps to increase high-frequency robustness is proposed.
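For intuition, a total-variation penalty on a feature map can be written in a few lines. This is a generic anisotropic TV sketch under our own formulation; the paper's exact regularizer may differ.

import numpy as np

def tv_penalty(fmap):
    """Anisotropic total variation of a conv feature map, shape (C, H, W):
    the summed absolute differences between neighboring activations."""
    dh = np.abs(fmap[:, 1:, :] - fmap[:, :-1, :]).sum()
    dw = np.abs(fmap[:, :, 1:] - fmap[:, :, :-1]).sum()
    return dh + dw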
DJMix: Unsupervised Task-Agnostic Augmentation
2020
Convolutional Neural Networks (CNNs) are vulnerable to unseen noise on input images at test time, and thus improving their robustness is crucial. In this paper, we propose DJMix, a data augmentation method for improving robustness.
SmoothMix: a Simple Yet Effective Data Augmentation to Train Robust Classifiers
TLDR
SmoothMix is introduced, in which images are blended along soft edges and the training labels are computed accordingly; this significantly increases the robustness of a network against image corruption, as validated by experiments on the CIFAR-100-C and ImageNet-C corruption datasets.
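A soft-edged blend can be sketched with a Gaussian mask whose mean value sets the label mixing ratio, mirroring the area-proportional labels described above. The mask shape and parameter ranges here are our illustrative assumptions, not necessarily the paper's exact choices.

import numpy as np

def smoothmix(x1, y1, x2, y2, rng=np.random):
    """Blend two images through a soft-edged Gaussian mask.

    x1, x2: (H, W, C) floats; y1, y2: one-hot label vectors.
    """
    h, w = x1.shape[:2]
    cy, cx = rng.uniform(0, h), rng.uniform(0, w)
    sigma = rng.uniform(0.25, 0.5) * min(h, w)   # assumed range
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    lam = mask.mean()                            # fraction taken from x1
    x = mask[..., None] * x1 + (1 - mask[..., None]) * x2
    return x, lam * y1 + (1 - lam) * y2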
An Effective Anti-Aliasing Approach for Residual Networks
TLDR
This work shows that frequency aliasing can be mitigated by placing non-trainable blur filters and using smooth activation functions at key locations, particularly where networks lack the capacity to learn them, leading to substantial improvements in out-of-distribution generalization on both image classification under natural corruptions (ImageNet-C) and few-shot learning (Meta-Dataset).
Distribution-preserving data augmentation
TLDR
A novel distribution-preserving data augmentation method is proposed that creates plausible image variations by shifting pixel colors to another point in the image color distribution, defining a regularized density-decreasing direction to create paths from the original pixel colors to the distribution tails.
KeepAugment: A Simple Information-Preserving Data Augmentation Approach
TLDR
This paper empirically shows that the standard data augmentation methods may introduce distribution shift and consequently hurt the performance on unaugmented data during inference, and proposes a simple yet effective approach, dubbed KeepAugment, to increase the fidelity of augmented images.
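Per the paper, the fidelity-preserving idea uses saliency maps to avoid destroying informative regions. A rough sketch as a saliency-aware Cutout follows; the precomputed saliency input, the 95th-percentile threshold, and the resampling loop are our illustrative assumptions.

import numpy as np

def keep_cutout(image, saliency, size=16, tries=10, rng=np.random):
    """Cutout that refuses to erase the highest-saliency region.

    saliency: (H, W) importance map, assumed to come from the model
    (e.g. gradient magnitudes).
    """
    h, w = image.shape[:2]
    thresh = np.quantile(saliency, 0.95)
    out = image.copy()
    for _ in range(tries):
        y = rng.randint(max(h - size, 1))
        x = rng.randint(max(w - size, 1))
        if saliency[y:y + size, x:x + size].max() < thresh:
            out[y:y + size, x:x + size] = 0.0    # patch avoids salient pixels
            break
    return out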
Does Data Augmentation Benefit from Split BatchNorms
TLDR
A recently proposed training paradigm is explored that uses an auxiliary BatchNorm for the potentially out-of-distribution, strongly augmented images; this method significantly improves performance on common image classification benchmarks such as CIFAR-10, CIFAR-100, and ImageNet.
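The auxiliary-BatchNorm idea amounts to routing clean and strongly augmented batches through separate normalization statistics. A minimal NumPy sketch under our own simplifications (per-feature stats on (N, C) activations, no affine parameters):

import numpy as np

class SplitBatchNorm:
    """Separate running statistics for clean vs. augmented batches."""

    def __init__(self, channels, momentum=0.9, eps=1e-5):
        self.stats = {k: (np.zeros(channels), np.ones(channels))
                      for k in ("clean", "aug")}
        self.momentum, self.eps = momentum, eps

    def __call__(self, x, kind="clean"):
        # Normalize with batch stats; update the matching running stats.
        mean, var = x.mean(axis=0), x.var(axis=0)
        rm, rv = self.stats[kind]
        rm[:] = self.momentum * rm + (1 - self.momentum) * mean
        rv[:] = self.momentum * rv + (1 - self.momentum) * var
        return (x - mean) / np.sqrt(var + self.eps)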
StyleLess layer: Improving robustness for real-world driving
TLDR
This work proposes a novel type of layer, dubbed StyleLess, which enables DNNs to learn robust and informative features that can cope with varying external conditions, and proposes multiple variations of this layer that can be integrated into most architectures and trained jointly with the main task.
Unsupervised Anomaly Detection Based on Data Augmentation and Mixing
TLDR
The effectiveness of the proposed method for improving the accuracy of unsupervised anomaly detection is confirmed, and the diversity of the training data is enhanced compared with applying the same image processing throughout.

References

Showing 1-10 of 48 references
Improving Robustness Without Sacrificing Accuracy with Patch Gaussian Augmentation
TLDR
This work introduces Patch Gaussian, a simple augmentation scheme that adds noise to randomly selected patches in an input image, leading to reduced sensitivity to high-frequency noise (similar to Gaussian noise) while retaining the ability to take advantage of relevant high-frequency information in the image.
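Patch Gaussian is simple enough to sketch directly: add Gaussian noise inside one randomly placed square patch and clip. The default patch size and noise scale below are illustrative; the paper sweeps over both.

import numpy as np

def patch_gaussian(image, patch_size=25, sigma=0.3, rng=np.random):
    """Add Gaussian noise to one random patch of a [0, 1] float image."""
    h, w = image.shape[:2]
    y = rng.randint(max(h - patch_size, 1))
    x = rng.randint(max(w - patch_size, 1))
    out = image.copy()
    patch = out[y:y + patch_size, x:x + patch_size]
    patch = patch + rng.normal(0, sigma, patch.shape)
    out[y:y + patch_size, x:x + patch_size] = np.clip(patch, 0.0, 1.0)
    return out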
Improved Regularization of Convolutional Neural Networks with Cutout
TLDR
This paper shows that the simple regularization technique of randomly masking out square regions of input during training, which is called cutout, can be used to improve the robustness and overall performance of convolutional neural networks.
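The cutout operation itself fits in a few lines: pick a center uniformly at random and zero out a square around it, clipping the square at the image border (so masks near edges are partially outside the image, as in the paper). A minimal NumPy sketch:

import numpy as np

def cutout(image, size=16, rng=np.random):
    """Zero out one square region centered at a random pixel."""
    h, w = image.shape[:2]
    cy, cx = rng.randint(h), rng.randint(w)
    y0, y1 = max(cy - size // 2, 0), min(cy + size // 2, h)
    x0, x1 = max(cx - size // 2, 0), min(cx + size // 2, w)
    out = image.copy()
    out[y0:y1, x0:x1] = 0.0
    return out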
A Fourier Perspective on Model Robustness in Computer Vision
TLDR
AutoAugment, a recently proposed data augmentation policy optimized for clean accuracy, is shown to achieve state-of-the-art robustness on the CIFAR-10-C benchmark, and is observed to use a more diverse set of augmentations than previous methods.
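The Fourier-perspective analysis probes a model by perturbing inputs along single Fourier basis directions and measuring the error at each frequency. A sketch of one such probe; the perturbation normalization here is our assumption.

import numpy as np

def fourier_perturb(image, i, j, eps=4.0 / 255, rng=np.random):
    """Add a single Fourier-basis perturbation at frequency (i, j) to a
    [0, 1] float image of shape (H, W, C)."""
    h, w = image.shape[:2]
    freq = np.zeros((h, w), dtype=complex)
    freq[i % h, j % w] = 1.0
    basis = np.fft.ifft2(freq).real            # spatial-domain basis image
    basis *= eps / (np.abs(basis).max() + 1e-12)
    sign = rng.choice([-1.0, 1.0])
    return np.clip(image + sign * basis[..., None], 0.0, 1.0)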
Improving the Robustness of Deep Neural Networks via Stability Training
TLDR
This paper presents a general stability training method to stabilize deep networks against small input distortions that result from various types of common image processing, such as compression, rescaling, and cropping.
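Stability training augments the task loss with a term that ties predictions on a distorted copy of the input to predictions on the clean input. For classification this stability term is a KL divergence; the weight alpha below is illustrative.

import numpy as np

def stability_loss(probs_clean, probs_distorted, labels, alpha=0.01):
    """Cross-entropy on clean predictions plus a KL stability term.

    probs_*: (N, K) softmax outputs; labels: (N, K) one-hot targets.
    """
    eps = 1e-12
    ce = -np.sum(labels * np.log(probs_clean + eps), axis=1).mean()
    kl = np.sum(probs_clean * (np.log(probs_clean + eps)
                               - np.log(probs_distorted + eps)), axis=1).mean()
    return ce + alpha * kl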
An Empirical Evaluation on Robustness and Uncertainty of Regularization Methods
TLDR
Extensive empirical evaluations on the robustness and uncertainty estimates of image classifiers trained with state-of-the-art regularization methods are presented, and experimental results show that certain regularization methods can serve as strong baselines for robustness and uncertainty estimation of DNNs.
CutMix: Regularization Strategy to Train Strong Classifiers With Localizable Features
TLDR
Patches are cut and pasted among training images, with the ground-truth labels mixed proportionally to the area of the patches; CutMix consistently outperforms state-of-the-art augmentation strategies on CIFAR and ImageNet classification tasks, as well as on the ImageNet weakly-supervised localization task.
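A minimal NumPy sketch of CutMix: sample a target ratio from a Beta distribution, paste a correspondingly sized crop of one image into another, and mix the labels by the actual pasted area (the crop is clipped at borders, so the ratio is recomputed).

import numpy as np

def cutmix(x1, y1, x2, y2, alpha=1.0, rng=np.random):
    """Paste a random crop of x2 into x1; mix labels by kept area.

    x1, x2: (H, W, C) images; y1, y2: one-hot label vectors.
    """
    h, w = x1.shape[:2]
    lam = rng.beta(alpha, alpha)                   # target mixing ratio
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = rng.randint(h), rng.randint(w)
    t, b = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    l, r = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    out = x1.copy()
    out[t:b, l:r] = x2[t:b, l:r]
    lam_adj = 1 - (b - t) * (r - l) / (h * w)      # actual kept fraction
    return out, lam_adj * y1 + (1 - lam_adj) * y2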
Making Convolutional Networks Shift-Invariant Again
TLDR
This work demonstrates that anti-aliasing by low-pass filtering before downsampling, a classical signal processing technique that has been undeservedly overlooked in modern deep networks, is compatible with existing architectural components such as max-pooling and strided convolution.
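The blur-then-subsample operation can be sketched directly. This uses a fixed 3x3 binomial kernel applied per channel to an (H, W, C) array; the paper also considers larger kernels and integrates the filter after max-pooling and inside strided convolutions.

import numpy as np

def blur_pool(x, stride=2):
    """Low-pass filter with a fixed binomial kernel, then subsample."""
    k = np.array([1.0, 2.0, 1.0])
    kernel = np.outer(k, k)
    kernel /= kernel.sum()
    pad = np.pad(x, ((1, 1), (1, 1), (0, 0)), mode="reflect")
    out = np.zeros_like(x)
    for dy in range(3):                        # 3x3 convolution as a
        for dx in range(3):                    # weighted sum of shifts
            out += kernel[dy, dx] * pad[dy:dy + x.shape[0],
                                        dx:dx + x.shape[1]]
    return out[::stride, ::stride]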
Generalisation in humans and deep neural networks
TLDR
The robustness of humans and current convolutional deep neural networks on object recognition under twelve different types of image degradations is compared and it is shown that DNNs trained directly on distorted images consistently surpass human performance on the exact distortion types they were trained on.
Examining the Impact of Blur on Recognition by Convolutional Networks
TLDR
It is found that by fine-tuning on a diverse mix of blurred images, convolutional neural networks can in fact learn to generate a blur invariant representation in their hidden layers.
AutoAugment: Learning Augmentation Policies from Data
TLDR
This paper describes a simple procedure called AutoAugment to automatically search for improved data augmentation policies, which achieves state-of-the-art accuracy on CIFAR-10, CIFAR-100, SVHN, and ImageNet (without additional data).