Corpus ID: 235485217

Light Lies: Optical Adversarial Attack

@article{Kim2021LightLO,
  title={Light Lies: Optical Adversarial Attack},
  author={Kyu-Lim Kim and Jeong-Soo Kim and Seung-Ri Song and Jun-Ho Choi and Chul-Min Joo and Jong-Seok Lee},
  journal={ArXiv},
  year={2021},
  volume={abs/2106.09908}
}
A significant amount of work has been done on adversarial attacks that inject imperceptible noise into images to deteriorate the image classification performance of deep models. However, most existing studies consider attacks in the digital (pixel) domain, where an image acquired by an image sensor with sampling and quantization is recorded. This paper, for the first time, introduces an optical adversarial attack, which physically alters the light field information arriving at the image sensor.
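To make the setting concrete, here is a minimal PyTorch sketch of a gradient-based attack through a differentiable optical forward model. The 4f-style model with a single phase mask in the Fourier plane is our simplifying assumption of one plausible optical system, and all names (optical_forward, optical_attack, model) are placeholders, not the paper's code.

import torch
import torch.nn.functional as F

def optical_forward(img, phase):
    # img: (B, C, H, W) intensity image in [0, 1]; its square root is used as the
    # amplitude of the incoming field (a common simplification)
    field = torch.sqrt(img.clamp(min=0)).to(torch.complex64)
    spectrum = torch.fft.fft2(field)              # field at the Fourier plane
    spectrum = spectrum * torch.exp(1j * phase)   # phase-only modulation (SLM-like)
    sensor = torch.fft.ifft2(spectrum)            # propagate back to the sensor plane
    return (sensor.abs() ** 2).clamp(0, 1)        # the sensor records intensity only

def optical_attack(model, img, label, steps=200, lr=0.05):
    phase = torch.zeros(img.shape[-2:], requires_grad=True)  # one shared phase mask
    opt = torch.optim.Adam([phase], lr=lr)
    for _ in range(steps):
        loss = -F.cross_entropy(model(optical_forward(img, phase)), label)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return phase.detach()

With a zero phase mask the forward model returns the original image up to numerical error, so the optimization trades classification loss against visible distortion.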


References

Showing 1-10 of 39 references
Adversarial camera stickers: A physical camera-based attack on deep learning systems
This work shows that by placing a carefully crafted and mainly translucent sticker over the lens of a camera, one can create universal perturbations of the observed images that are inconspicuous, yet cause target objects to be misclassified as a different (targeted) class.
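As a toy illustration of the idea, the sticker can be simulated as a fixed translucent overlay whose color and opacity map are optimized over many images. This is a sketch under our own simplified alpha-blending model, not the authors' implementation; all names are placeholders.

import torch
import torch.nn.functional as F

def apply_sticker(img, color, alpha):
    # img: (B, 3, H, W); color: (3, 1, 1) learnable sticker color;
    # alpha: (1, H, W) learnable opacity map, near zero away from the sticker
    a = torch.sigmoid(alpha)                          # keep opacity in (0, 1)
    return (1 - a) * img + a * torch.sigmoid(color)   # alpha-blend the overlay

def sticker_loss(model, batch, target_class, color, alpha):
    # universal: one (color, alpha) pair is optimized over many images, mimicking
    # a single physical sticker that affects every photo the camera takes
    logits = model(apply_sticker(batch, color, alpha))
    target = torch.full((batch.size(0),), target_class, dtype=torch.long)
    return F.cross_entropy(logits, target)            # minimize for a targeted attack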
Robust Physical-World Attacks on Deep Learning Visual Classification
This work proposes a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions, and shows that adversarial examples generated using RP2 achieve high targeted misclassification rates against standard-architecture road sign classifiers in the physical world under various environmental conditions, including viewpoints.
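One plausible reading of the RP2 optimization, in PyTorch-style pseudocode: a masked perturbation is trained over photos of the same object taken under varied conditions. The paper's printability term is omitted and all names are placeholders.

import torch
import torch.nn.functional as F

def rp2_step(model, photos, mask, delta, target, opt, lam=1e-3):
    # photos: (N, 3, H, W) images of the same object under varied physical conditions
    # mask: (1, 1, H, W) binary region where the perturbation (e.g., stickers) may appear
    # delta: learnable perturbation tensor created with requires_grad=True
    adv = (photos + mask * delta).clamp(0, 1)
    target_t = torch.full((photos.size(0),), target, dtype=torch.long)
    loss = F.cross_entropy(model(adv), target_t) + lam * (mask * delta).abs().sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()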
Adversarial examples in the physical world
It is found that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera, which shows that machine learning systems are vulnerable to adversarial examples even in physical-world scenarios.
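This reference also introduced the basic iterative method (iterated FGSM) used in its experiments. A minimal sketch, assuming a PyTorch classifier and inputs in [0, 1]:

import torch
import torch.nn.functional as F

def basic_iterative_method(model, img, label, eps=8/255, alpha=1/255, steps=10):
    # iterated FGSM: small signed-gradient steps, each projected back to an
    # L_inf ball of radius eps around the original image
    adv = img.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), label)
        grad, = torch.autograd.grad(loss, adv)
        adv = adv.detach() + alpha * grad.sign()
        adv = img + (adv - img).clamp(-eps, eps)  # stay within the eps-ball
        adv = adv.clamp(0, 1).detach()            # stay a valid image
    return adv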
Compressive imaging for defending deep neural networks from adversarial attacks.
This Letter proposes to employ compressive sensing (CS) to defend DNNs from adversarial attacks and, at the same time, to encode the image, thus preventing counterattacks; computer simulations and optical experimental results of object classification in adversarial images captured with a CS single-pixel camera are presented.
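For context, a compressive-sensing pipeline in miniature: random linear measurements stand in for the single-pixel camera, and ISTA performs sparse recovery. This is a generic CS sketch under textbook assumptions, not the Letter's specific scheme.

import numpy as np

rng = np.random.default_rng(0)
n, m = 1024, 256                                  # n pixels, m << n measurements
Phi = rng.standard_normal((m, n)) / np.sqrt(m)    # random sensing matrix

def sense(x):
    return Phi @ x                                # single-pixel-style measurements y = Phi x

def reconstruct(y, iters=200, step=0.05, lam=0.01):
    # ISTA: recover a sparsely representable signal by iterative soft-thresholding
    x = np.zeros(n)
    for _ in range(iters):
        x = x - step * Phi.T @ (Phi @ x - y)      # gradient step on ||Phi x - y||^2 / 2
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # shrinkage
    return x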
Is Robustness the Cost of Accuracy? - A Comprehensive Study on the Robustness of 18 Deep Image Classification Models
This paper thoroughly benchmarks 18 ImageNet models using multiple robustness metrics, including the distortion, success rate, and transferability of adversarial examples between 306 pairs of models, and reveals several new insights.
Synthesizing Robust Adversarial Examples
The existence of robust 3D adversarial objects is demonstrated, and the first algorithm for synthesizing examples that are adversarial over a chosen distribution of transformations is presented; the algorithm synthesizes two-dimensional adversarial images that are robust to noise, distortion, and affine transformation.
Towards Evaluating the Robustness of Neural Networks
It is demonstrated that defensive distillation does not significantly increase the robustness of neural networks, and three new attack algorithms are introduced that succeed on both distilled and undistilled neural networks with 100% probability.
Adversarial Machine Learning at Scale
This research applies adversarial training to ImageNet, finds that single-step attacks are the best for mounting black-box attacks, and resolves a "label leaking" effect that causes adversarially trained models to perform better on adversarial examples than on clean examples.
Deep Learning Microscopy
It is demonstrated that a deep neural network can significantly improve optical microscopy, enhancing its spatial resolution over a large field of view and depth of field, and can be used to design computational imagers that keep improving as they continue to image specimens, establishing new transformations among different modes of imaging.
Very Deep Convolutional Networks for Large-Scale Image Recognition
This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting, using an architecture with very small (3x3) convolution filters, and shows that a significant improvement over prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.