Corpus ID: 211258631

PDA: Progressive Data Augmentation for General Robustness of Deep Neural Networks

@inproceedings{Yu2019PDAPD,
  title={PDA: Progressive Data Augmentation for General Robustness of Deep Neural Networks},
  author={Hang Yu and Aishan Liu and Xianglong Liu and Gen Li and Ping Luo and Ruozhen Cheng and Jichen Yang and Chongzhi Zhang},
  year={2019}
}
Adversarial images are designed to mislead deep neural networks (DNNs) and have attracted great attention in recent years. Although several defense strategies have achieved encouraging robustness against adversarial samples, most of them fail to improve robustness against common corruptions such as noise, blur, and weather/digital effects (e.g., frost, pixelate). To address this problem, we propose a simple yet effective method, named Progressive Data Augmentation (PDA), which enables general robustness of… 
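
The abstract is truncated before the method details, so the following is only a minimal, hypothetical sketch of the general idea of progressively strengthening data augmentation over the course of training; the schedule, transforms, and function names are illustrative assumptions, not the paper's actual PDA algorithm.

```python
# Hypothetical sketch: augmentation severity ramps up with training progress.
# The linear schedule and transform choices are assumptions for illustration,
# not the PDA method described in the (truncated) abstract above.
import torchvision.transforms as T

def make_augmentation(epoch, max_epochs):
    """Return a transform whose strength grows with training progress."""
    strength = epoch / max_epochs  # 0.0 at the start, 1.0 at the end
    return T.Compose([
        T.RandomResizedCrop(32, scale=(1.0 - 0.5 * strength, 1.0)),
        T.ColorJitter(brightness=0.4 * strength, contrast=0.4 * strength),
        T.RandomHorizontalFlip(),
        T.ToTensor(),
    ])

# Usage: rebuild the training transform at the start of each epoch, e.g.
#   train_set.transform = make_augmentation(epoch, max_epochs)
# so early epochs see near-clean data and later epochs see harder samples.
```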

Learning Self-Regularized Adversarial Views for Self-Supervised Vision Transformers

This work reduces the search cost of AutoView to nearly zero by learning views and network parameters simultaneously in a single forward-backward step, minimizing and maximizing, respectively, the mutual information among different augmented views, and proposes a self-regularized loss term to guarantee information propagation.

Robustness Should Not Be at Odds with Accuracy

The phenomenon of adversarial examples in deep learning models has caused substantial concern over their reliability and trustworthiness: in many instances an imperceptible perturbation can falsely change a model's prediction.

Progressive 3-Layered Block Architecture for Image Classification

  • M. Gogoi, S. Begum
  • Computer Science
    International Journal of Advanced Computer Science and Applications
  • 2022
A "Progressive 3-Layered Block Architecture" model is proposed in this paper, which considers the fine-tuning of hyperparameters and optimizers of the deep network to achieve state-of-the-art accuracy on benchmark datasets with fewer parameters.

ParaPose: Parameter and Domain Randomization Optimization for Pose Estimation using Synthetic Data

The developed approach shows state-of-the-art performance of 82.0% recall on the challenging OCCLUSION dataset, outperforming all previous methods by a large margin, and proves the validity of automatically setting up pose estimation using purely synthetic data.

Holistic Deep Learning

This paper addresses the problem of constructing holistic deep learning models by proposing a novel formulation that solves these issues in combination, improving accuracy, robustness, stability, and sparsity over traditional deep learning models, among many others.

On the (Un-)Avoidability of Adversarial Examples

This work carefully argues that adversarial robustness should be defined as a locally adaptive measure complying with the underlying distribution, suggests a definition for an adaptive robust loss, derives an empirical version of it, and develops a resulting data-augmentation framework.

EfficientNetV2: Smaller Models and Faster Training

An improved method of progressive learning is proposed, which adaptively adjusts regularization (e.g., dropout and data augmentation) along with image size and significantly outperforms previous models on the ImageNet and CIFAR/Cars/Flowers datasets.
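
As a rough illustration of this kind of progressive learning, image size and regularization strength can be scheduled jointly across training stages; the stage values below are assumed placeholders for demonstration, not EfficientNetV2's published configuration.

```python
# Illustrative progressive-learning schedule: image size and regularization
# strength grow together across stages. All numbers are assumptions.
stages = [
    {"image_size": 128, "dropout": 0.10, "augment_magnitude": 5},
    {"image_size": 224, "dropout": 0.20, "augment_magnitude": 10},
    {"image_size": 300, "dropout": 0.30, "augment_magnitude": 15},
]

for stage in stages:
    # In a real training loop one would rebuild the input pipeline and update
    # the model's dropout rate here, then train for a fixed number of epochs.
    print(f"stage: {stage['image_size']}px, dropout={stage['dropout']}, "
          f"augment magnitude={stage['augment_magnitude']}")
```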

Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems

Challenging the Adversarial Robustness of DNNs Based on Error-Correcting Output Codes

An in-depth investigation of the adversarial robustness achieved by the ECOC approach is carried out, proposing a new adversarial attack specifically designed for multi-label classification architectures, like the ECOC-based one, and applying two existing attacks.

References

Showing 1–10 of 40 references

Adversarial Examples Are a Natural Consequence of Test Error in Noise

It is suggested that improving adversarial robustness should go hand in hand with improving performance in the presence of more general and realistic image corruptions, and that future adversarial defenses should consider evaluating the robustness of their methods to distributional shift with benchmarks such as ImageNet-C.

Benchmarking Neural Network Robustness to Common Corruptions and Perturbations

This paper standardizes and expands the corruption robustness topic, while showing which classifiers are preferable in safety-critical applications, and proposes a new dataset called ImageNet-P, which enables researchers to benchmark a classifier's robustness to common perturbations.

Towards Deep Learning Models Resistant to Adversarial Attacks

This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
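
The robust-optimization view described here trains on worst-case perturbations typically found by projected gradient descent (PGD). Below is a minimal PGD sketch in PyTorch; the hyperparameters (eps, alpha, steps) are assumed defaults for illustration, not the paper's exact settings.

```python
# Minimal L-infinity PGD attack used as the inner maximization in
# robust-optimization style adversarial training. Hyperparameters are assumed.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=7):
    """Find a worst-case perturbation of x within an eps-ball around it."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()       # ascend the loss
        x_adv = x.detach() + (x_adv - x).clamp(-eps, eps)  # project to the ball
        x_adv = x_adv.clamp(0, 1)                          # stay a valid image
    return x_adv.detach()

# Adversarial training then minimizes the loss on pgd_attack(model, x, y)
# in place of (or in addition to) the clean batch.
```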

Very Deep Convolutional Networks for Large-Scale Image Recognition

This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.

ImageNet classification with deep convolutional neural networks

A large, deep convolutional neural network was trained to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes and employed a recently developed regularization method called "dropout" that proved to be very effective.

You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle

It is shown that adversarial training can be cast as a discrete-time differential game, and the proposed algorithm YOPO (You Only Propagate Once) can achieve comparable defense accuracy with approximately 1/5 to 1/4 of the GPU time of the projected gradient descent (PGD) algorithm.

Adversarial examples in the physical world

It is found that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera, which shows that even in physical-world scenarios, machine learning systems are vulnerable to adversarial examples.

Perceptual-Sensitive GAN for Generating Adversarial Patches

This paper proposes a perceptual-sensitive generative adversarial network (PS-GAN) that can simultaneously enhance the visual fidelity and the attacking ability for the adversarial patch, and treats the patch generation as a patch-to-patch translation via an adversarial process.

A Fourier Perspective on Model Robustness in Computer Vision

AutoAugment, a recently proposed data augmentation policy optimized for clean accuracy, achieves state-of-the-art robustness on the CIFAR-10-C benchmark and is observed to use a more diverse set of augmentations than previous methods.

Using learned optimizers to make models robust to input noise

The possibility of meta-training a learned optimizer that can train image classification models such that they are robust to common image corruptions is explored, suggesting that meta-learning provides a novel approach for studying and improving the robustness of deep learning models.