• Corpus ID: 233423300

AdvHaze: Adversarial Haze Attack

@article{Gao2021AdvHazeAH,
  title={AdvHaze: Adversarial Haze Attack},
  author={Ruijun Gao and Qing Guo and Felix Juefei-Xu and Hongkai Yu and Wei Feng},
  journal={ArXiv},
  year={2021},
  volume={abs/2104.13673}
}
In recent years, adversarial attacks have drawn more attention for their value in evaluating and improving the robustness of machine learning models, especially neural network models. However, previous attack methods have mainly focused on applying ℓp norm-bounded noise perturbations. In this paper, we instead introduce a novel adversarial attack method based on haze, which is a common phenomenon in real-world scenery. Our method can synthesize potentially adversarial haze into an image…
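The truncated abstract omits the synthesis details, but haze simulation in this line of work conventionally builds on the atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)) with transmission t(x) = exp(−β·d(x)) over scene depth d. A minimal sketch of that composition step, assuming NumPy and illustrative parameter names (`beta`, `airlight`) rather than the paper's exact parameterization:

```python
import numpy as np

def synthesize_haze(clean, depth, beta=1.0, airlight=0.8):
    """Composite haze via the atmospheric scattering model:
    I = J * t + A * (1 - t), with t = exp(-beta * depth).

    clean:    HxWx3 float array in [0, 1] (haze-free image J)
    depth:    HxW float array of scene depth (larger = farther)
    beta:     scattering coefficient controlling haze density
    airlight: global atmospheric light A (scalar or RGB triple)
    """
    t = np.exp(-beta * depth)[..., None]      # transmission map, HxWx1
    hazy = clean * t + airlight * (1.0 - t)   # scattering composition
    return np.clip(hazy, 0.0, 1.0)
```

In an attack setting, the haze parameters (here `beta` and `airlight`, and potentially the transmission map itself) would be optimized against the target model's loss rather than held fixed.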


Towards Robust Rain Removal Against Adversarial Attacks: A Comprehensive Benchmark Analysis and Beyond

This study shows that, when the image/video is highly degraded, rain removal methods are more vulnerable to adversarial attacks, as small distortions/perturbations become less noticeable or detectable.

Pasadena: Perceptually Aware and Stealthy Adversarial Denoise Attack

This work identifies a new task: stealthily embedding attacks inside the image denoising module widely deployed in multimedia devices as an image post-processing operation, so as to simultaneously enhance the visual image quality and fool DNNs.

AVA: Adversarial Vignetting Attack against Visual Recognition

This work proposes the radial-anisotropic adversarial vignetting attack (RA-AVA), along with a geometry-aware level-set optimization method to solve for the adversarial vignetting regions and physical parameters jointly.
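Vignetting itself is usually modeled as a multiplicative radial falloff around an optical center. The sketch below illustrates only that base effect with an assumed polynomial gain; the paper's method additionally couples the physical parameters with a level-set representation of the vignetting region, which is omitted here, and `cx`, `cy`, `k1`, `k2` are illustrative parameters:

```python
import numpy as np

def apply_vignetting(image, cx, cy, k1, k2):
    """Darken an HxWx3 image in [0, 1] with a radial polynomial
    falloff g(r) = 1 / (1 + k1*r^2 + k2*r^4) centered at (cx, cy)."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    r2 = ((xs - cx) ** 2 + (ys - cy) ** 2) / float(h * h + w * w)  # normalized r^2
    gain = 1.0 / (1.0 + k1 * r2 + k2 * r2 ** 2)
    return np.clip(image * gain[..., None], 0.0, 1.0)
```

In the adversarial setting, the center and falloff coefficients would be the physical parameters tuned against the victim model.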

Adversarial Relighting against Face Recognition

The extensive and insightful results demonstrate the work can generate realistic adversarial relighted face images fooling face recognition tasks easily, revealing the threat of specific light directions and strengths.

Scale-free Photo-realistic Adversarial Pattern Attack

This paper proposes a scale-free, generation-based attack algorithm that synthesizes semantically meaningful adversarial patterns globally onto images of arbitrary scale, outperforming state-of-the-art methods across a wide range of attack settings.

Scale-free and Task-agnostic Attack: Generating Photo-realistic Adversarial Patterns with Patch Quilting Generator

A novel Patch Quilting Generative Adversarial Network (PQ-GAN) is proposed to learn the first scale-free CNN generator, which can be applied to attack images of arbitrary scale across various computer vision tasks.

Can You Spot the Chameleon? Adversarially Camouflaging Images from Co-Salient Object Detection

  • Ruijun Gao, Qing Guo, Song Wang
  • Computer Science
    2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2022
The very first black-box joint adversarial exposure and noise attack (Jadena), which jointly and locally tunes the exposure and additive perturbations of the image according to a newly designed high-feature-level contrast-sensitive loss function, leads to significant performance degradation on various co-saliency detection datasets and makes the co-salient objects undetectable.

AdvBokeh: Learning to Adversarially Defocus Blur

A Depth-guided Bokeh Synthesis Network (DebsNet) that can flexibly synthesize, refocus, and adjust the level of bokeh in an image with a one-stage training procedure, together with a depth-guided gradient-based attack that regularizes the gradient to improve the realism of the adversarial bokeh.

Research Landscape on Robust Perception

My research is focused on a fuller understanding of deep learning: I am actively exploring new methods that are statistically efficient and adversarially robust, and investigating under what conditions deep learning starts to fail.

Benchmarking Shadow Removal for Facial Landmark Detection and Beyond

A novel detection-aware shadow removal framework is designed, which empowers shadow removal to achieve higher restoration quality and enhance the shadow robustness of deployed facial landmark detectors.

References


Watch out! Motion is Blurring the Vision of Your Deep Neural Networks

A novel adversarial attack method that can generate visually natural motion-blurred adversarial examples, named the motion-based adversarial blur attack (ABBA), which penetrates state-of-the-art GAN-based deblurring mechanisms more effectively than other blurring methods.

Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks

A translation-invariant attack method is proposed to generate more transferable adversarial examples against defense models; it fools eight state-of-the-art defenses at an 82% success rate on average based only on transferability, demonstrating the insecurity of current defense techniques.
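The translation-invariant idea reduces to convolving the input gradient with a pre-defined smoothing kernel before the sign step, which approximates attacking an ensemble of translated copies of the image. A minimal single-step sketch in PyTorch, assuming a differentiable classifier `model`, integer labels `y`, inputs in [0, 1], and a normalized odd-sized 2D `kernel` (e.g., Gaussian):

```python
import torch
import torch.nn.functional as F

def ti_fgsm_step(model, x, y, eps, kernel):
    """One translation-invariant FGSM step: smooth the input gradient
    with a fixed kernel, then take the usual sign update."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    c = x.shape[1]                                     # channel count
    w = kernel.view(1, 1, *kernel.shape).repeat(c, 1, 1, 1)
    grad = F.conv2d(grad, w, padding=kernel.shape[-1] // 2, groups=c)  # depthwise smoothing
    return (x + eps * grad.sign()).clamp(0, 1).detach()
```

The same gradient-smoothing line drops into iterative or momentum variants unchanged, which is how the paper combines it with stronger base attacks.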

It's Raining Cats or Dogs? Adversarial Rain Attack on DNN Perception

A factor-aware rain generation method is proposed that simulates rain streaks according to the camera exposure process and models learnable rain factors for adversarial attack; the adversarial rain attack is then mounted against image classification and object detection.

Explaining and Harnessing Adversarial Examples

It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.
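The fast gradient sign method (FGSM) introduced in this paper is a single-step update, x′ = x + ε·sign(∇x J(θ, x, y)). A minimal PyTorch sketch, assuming inputs in [0, 1] and a cross-entropy objective:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Fast Gradient Sign Method: perturb x by eps in the direction
    of the sign of the loss gradient with respect to the input."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```

The sign step maximizes a first-order approximation of the loss under an ℓ∞ budget of ε, which matches the paper's linearity argument.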

Image-to-Image Translation with Conditional Adversarial Networks

Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.

Adv-watermark: A Novel Watermark Perturbation for Adversarial Examples

A novel watermark perturbation for adversarial examples (Adv-watermark) is proposed, which combines image watermarking techniques with adversarial example algorithms and outperforms state-of-the-art attack methods.

Adversarial examples in the physical world

It is found that a large fraction of adversarial examples are classified incorrectly even when perceived through a camera, which shows that even in physical-world scenarios, machine learning systems are vulnerable to adversarial examples.
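This paper also introduced the basic iterative method (BIM): repeated small FGSM steps, with the iterate clipped back into an ℓ∞ ball around the original image; this is the variant used in its physical-world experiments. A sketch reusing the `fgsm` helper from the FGSM example above (`alpha` and `steps` are illustrative step size and iteration count):

```python
import torch

def bim(model, x, y, eps, alpha, steps):
    """Basic Iterative Method: repeated small FGSM steps, projected
    back into the L-infinity ball of radius eps around the input."""
    x0, x_adv = x.clone().detach(), x.clone().detach()
    for _ in range(steps):
        x_adv = fgsm(model, x_adv, y, alpha)  # one small FGSM step (see sketch above)
        x_adv = torch.min(torch.max(x_adv, x0 - eps), x0 + eps).clamp(0, 1)
    return x_adv
```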

StarGAN: Unified Generative Adversarial Networks for Multi-domain Image-to-Image Translation

The unified model architecture of StarGAN allows simultaneous training on multiple datasets with different domains within a single network, which leads to StarGAN's superior quality of translated images compared to existing models, as well as the novel capability of flexibly translating an input image to any desired target domain.

Towards Robust Monocular Depth Estimation: Mixing Datasets for Zero-Shot Cross-Dataset Transfer

This work proposes a robust training objective that is invariant to changes in depth range and scale, advocates the use of principled multi-objective learning to combine data from different sources, and highlights the importance of pretraining encoders on auxiliary tasks.

Level-aware Haze Image Synthesis by Self-Supervised Content-Style Disentanglement

This paper proposes a self-supervised style regression via stochastic linear interpolation to reduce the content information in the style feature, and demonstrates the completeness of the disentanglement and its superiority in level-aware haze image synthesis.