Corpus ID: 211572896

Applying Tensor Decomposition to image for Robustness against Adversarial Attack

@article{Cho2020ApplyingTD,
  title={Applying Tensor Decomposition to image for Robustness against Adversarial Attack},
  author={S. Cho and T. Jun and Mingu Kang and Daeyoung Kim},
  journal={ArXiv},
  year={2020},
  volume={abs/2002.12913}
}
Deep learning technology is advancing rapidly and shows dramatic performance in computer vision. However, deep learning based models turn out to be highly vulnerable to small perturbations known as adversarial attacks: adding a tiny perturbation to an input can easily fool the model. On the other hand, tensor decomposition methods are widely used to compress tensor data, including data matrices and images. In this paper, we suggest combining tensor decomposition…
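The core idea, as a sketch under stated assumptions: reconstructing an image from a low-rank tensor decomposition discards much of the small, high-frequency adversarial perturbation while preserving the dominant image structure, so the classifier sees the reconstruction instead of the raw input. The snippet below uses the tensorly library's Tucker decomposition; the rank choice and the helper name `lowrank_reconstruct` are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch (not the paper's exact method): low-rank Tucker
# reconstruction as a preprocessing defense. Ranks are illustrative.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

def lowrank_reconstruct(image: np.ndarray, ranks=(64, 64, 3)) -> np.ndarray:
    """Approximate an HxWxC image with a rank-(r1, r2, r3) Tucker model."""
    tensor = tl.tensor(image.astype(np.float64))
    # Tucker factorization: a small core tensor plus one factor matrix
    # per mode. Truncating the ranks acts like a low-pass filter that
    # tends to wash out small adversarial perturbations.
    core, factors = tucker(tensor, rank=list(ranks))
    recon = tl.tucker_to_tensor((core, factors))
    return np.clip(recon, 0.0, 255.0)

# Usage: classify the reconstruction instead of the raw (possibly
# attacked) input, e.g.
#   defended = lowrank_reconstruct(adversarial_image)
#   prediction = model.predict(defended[None, ...])
```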
