CorrGAN: Input Transformation Technique Against Natural Corruptions

@article{Haque2022CorrGANIT,
  title={CorrGAN: Input Transformation Technique Against Natural Corruptions},
  author={Mirazul Haque and Christof J. Budnik and Wei Yang},
  journal={2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
  year={2022},
  pages={193--196}
}
  • Published 19 April 2022
Because of the increasing accuracy of Deep Neural Networks (DNNs) on different tasks, many real-time systems now rely on DNNs. These DNNs are vulnerable to adversarial perturbations and corruptions. In particular, natural corruptions such as fog, blur, and contrast changes can affect the predictions of a DNN in an autonomous vehicle. At run time, these corruptions need to be detected, and the corrupted inputs need to be denoised so that they can be predicted correctly. In this work, we propose… 

References

Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models

The proposed Defense-GAN, a new framework leveraging the expressive capability of generative models to defend deep neural networks against adversarial perturbations, is empirically shown to be consistently effective against different attack methods and improves on existing defense strategies.
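The core idea behind Defense-GAN is to project an incoming input onto the range of a pre-trained generator G by minimizing ||G(z) - x||² over the latent code z, then classify the projection instead of the raw input. The sketch below illustrates this with a toy linear generator and a finite-difference gradient; the actual method backpropagates through a trained GAN, so the generator, step sizes, and restart counts here are illustrative assumptions only.

```python
import numpy as np

def defense_gan_project(x, G, z_dim, steps=200, lr=0.1, restarts=5, seed=0):
    """Project x onto the range of generator G by minimizing
    ||G(z) - x||^2 over z with gradient descent and random restarts.
    Uses a finite-difference gradient for simplicity; Defense-GAN
    backpropagates through the generator instead."""
    rng = np.random.default_rng(seed)
    best, best_loss = None, np.inf
    for _ in range(restarts):
        z = rng.normal(size=z_dim)
        for _ in range(steps):
            grad = np.zeros_like(z)
            for i in range(z_dim):  # central-difference gradient in each coordinate
                e = np.zeros(z_dim)
                e[i] = 1e-4
                grad[i] = (np.sum((G(z + e) - x) ** 2)
                           - np.sum((G(z - e) - x) ** 2)) / 2e-4
            z -= lr * grad
        loss = np.sum((G(z) - x) ** 2)
        if loss < best_loss:  # keep the restart with the best reconstruction
            best, best_loss = G(z), loss
    return best
```

For an input that already lies on the generator's manifold, the projection recovers it almost exactly; adversarial perturbations that push the input off the manifold are largely removed by the projection.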

GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples

GraN is a time- and parameter-efficient method that is easily adaptable to any DNN, based on the layer-wise norm of the DNN's gradient regarding the loss of the current input-output combination, which can be computed via backpropagation.
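The gradient-norm idea can be illustrated on a minimal softmax classifier: score an input by the norm of the loss gradient with respect to the parameters, using the model's own prediction as the pseudo-label (no ground truth is available at test time). This is a hedged sketch of the general mechanism, not GraN's layer-wise DNN implementation.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def gran_score(W, x):
    """Norm of the cross-entropy loss gradient w.r.t. the weights,
    taking the predicted class as the target. Confident, clean inputs
    tend to yield small gradients; ambiguous or abnormal inputs larger ones."""
    p = softmax(W @ x)
    y = np.eye(len(p))[p.argmax()]   # predicted label as pseudo-target
    grad_W = np.outer(p - y, x)      # dL/dW for softmax + cross-entropy
    return float(np.linalg.norm(grad_W))
```

A threshold on this score (calibrated on held-out clean data) then flags suspicious inputs; the paper computes the analogous layer-wise norms via one backpropagation pass.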

On Detecting Adversarial Inputs with Entropy of Saliency Maps

It is demonstrated that quantitative and qualitative evaluation of adversarial saliency maps through Shannon entropy can be an efficient, effective way of detecting adversarial attacks, especially in deep neural networks with a linear nature.
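The detection signal here is the Shannon entropy of a normalized saliency map: adversarial inputs tend to scatter attribution across many pixels, raising the entropy. A minimal sketch of that statistic, assuming the saliency map is given (e.g. an input-gradient magnitude); thresholds and the exact saliency method are the paper's, not shown here.

```python
import numpy as np

def saliency_entropy(saliency):
    """Shannon entropy (bits) of a saliency map normalized to a
    probability distribution over pixels. Concentrated maps give low
    entropy; diffuse maps give high entropy."""
    s = np.abs(saliency).ravel()
    p = s / s.sum()
    p = p[p > 0]                      # drop zero entries: 0*log(0) := 0
    return float(-(p * np.log2(p)).sum())
```

A map concentrated on one pixel has entropy 0, while a uniform 4x4 map has entropy log2(16) = 4 bits, so a simple entropy threshold separates the two regimes.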

Benchmarking Neural Network Robustness to Common Corruptions and Perturbations

This paper standardizes and expands the corruption robustness topic, while showing which classifiers are preferable in safety-critical applications, and proposes a new dataset called ImageNet-P which enables researchers to benchmark a classifier's robustness to common perturbations.

ILFO: Adversarial Attack on Adaptive Neural Networks

This paper proposes ILFO (Intermediate Output-Based Loss Function Optimization) attack against a common type of energy-saving neural networks, Adaptive Neural Networks (AdNN), the first attempt to attack the energy consumption of an AdNN.

A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks

This paper proposes a simple yet effective method for detecting any abnormal samples, which is applicable to any pre-trained softmax neural classifier, and obtains the class conditional Gaussian distributions with respect to (low- and upper-level) features of the deep models under Gaussian discriminant analysis.
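The detector described here fits a class-conditional Gaussian (shared covariance) to feature vectors and scores a test point by its minimum Mahalanobis distance to any class mean; large distances indicate out-of-distribution or adversarial inputs. A minimal sketch on raw 2-D features, assuming features are given; the paper applies this to intermediate DNN layers and ensembles the layer scores.

```python
import numpy as np

def fit_class_gaussians(feats, labels):
    """Per-class means and a shared precision matrix, as in
    Gaussian discriminant analysis."""
    classes = np.unique(labels)
    means = {c: feats[labels == c].mean(axis=0) for c in classes}
    centered = np.vstack([feats[labels == c] - means[c] for c in classes])
    cov = centered.T @ centered / len(feats)
    # small ridge term for numerical stability of the inverse
    return means, np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))

def mahalanobis_score(x, means, prec):
    """Minimum squared Mahalanobis distance to any class mean;
    high scores flag abnormal samples."""
    return min(float((x - m) @ prec @ (x - m)) for m in means.values())
```

Points near either training cluster score low, while points far from all class means score high, so a single threshold yields the abnormality detector.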

DCGANs for image super-resolution, denoising and deblurring

Deep convolutional generative adversarial networks (DCGAN) is used to do various image processing tasks such as super-resolution, denoising and deconvolution and shows slightly lower PSNR compared to traditional methods.

NICGSlowDown: Evaluating the Efficiency Robustness of Neural Image Caption Generation Models

Experimental results show that NICGSlowDown can generate images with human-unnoticeable perturbations that will increase the NICG model latency up to 483.86%, which could raise the community's concern about the efficiency robustness of NICG models.

Intriguing properties of neural networks

It is found that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis, and it is suggested that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.

Deep Residual Learning for Image Recognition

This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.