Corpus ID: 213176396

Backdooring and Poisoning Neural Networks with Image-Scaling Attacks

@article{Quiring2020BackdooringAP,
  title={Backdooring and Poisoning Neural Networks with Image-Scaling Attacks},
  author={Erwin Quiring and Konrad Rieck},
  journal={ArXiv},
  year={2020},
  volume={abs/2003.08633}
}
  • Erwin Quiring, Konrad Rieck
  • Published 2020
  • Computer Science
  • ArXiv
  • Backdoors and poisoning attacks are a major threat to the security of machine-learning and vision systems. Often, however, these attacks leave visible artifacts in the images that can be visually detected and weaken the efficacy of the attacks. In this paper, we propose a novel strategy for hiding backdoor and poisoning attacks. Our approach builds on a recent class of attacks against image scaling. These attacks enable manipulating images such that they change their content when scaled to a…
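
The core weakness behind these image-scaling attacks is easy to illustrate. The following is a minimal sketch, not the paper's own method: it assumes a plain nearest-neighbour downscaler (implemented directly in NumPy so the sampling pattern is known exactly), and the names `nn_downscale` and `craft_scaling_attack` are hypothetical. Because nearest-neighbour downscaling copies only a small, predictable subset of source pixels, overwriting exactly those pixels yields an image that looks like the source at full resolution but becomes the attacker's target after scaling.

```python
import numpy as np

def nn_downscale(img, out_h, out_w):
    """Nearest-neighbour downscaling: output pixel (i, j) copies the single
    source pixel at row (i * H) // out_h, column (j * W) // out_w."""
    H, W = img.shape[:2]
    rows = (np.arange(out_h) * H) // out_h
    cols = (np.arange(out_w) * W) // out_w
    return img[np.ix_(rows, cols)]

def craft_scaling_attack(source, target):
    """Overwrite only the source pixels that nn_downscale will sample, so
    that nn_downscale(attack) equals the target while every other pixel
    still shows the source image."""
    H, W = source.shape[:2]
    h, w = target.shape[:2]
    rows = (np.arange(h) * H) // h
    cols = (np.arange(w) * W) // w
    attack = source.copy()
    attack[np.ix_(rows, cols)] = target
    return attack

# Toy demo: a 512x512 source and a 32x32 target. Only 32 * 32 of the
# 512 * 512 pixels (~0.4%) are modified, so the attack image stays
# visually close to the source, yet it downscales exactly to the target.
source = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)
target = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
attack = craft_scaling_attack(source, target)
assert np.array_equal(nn_downscale(attack, 32, 32), target)
```

For interpolating scalers such as bilinear or bicubic, the attacks referenced below (e.g. the camouflage attack of Xiao et al.) achieve the same effect by solving a constrained optimization problem; the paper builds on this class of attacks to hide backdoor triggers and poisoning perturbations from visual inspection.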

    References

    Publications referenced by this paper (a subset of its 21 references):

    Seeing is Not Believing: Camouflage Attacks on Image Scaling Algorithms. Xiao et al., USENIX Security 2019. (Highly influential)

    Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks. Wang et al., IEEE S&P 2019.

    Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks. Shafahi et al., NeurIPS 2018. (Highly influential)

    Bypassing Backdoor Detection Algorithms in Deep Learning. Tan and Shokri, 2019.

    Latent Backdoor Attacks on Deep Neural Networks. Yao et al., ACM CCS 2019.

    Towards Evaluating the Robustness of Neural Networks. Carlini and Wagner, IEEE S&P 2017.

    Trojaning Attack on Neural Networks. Liu et al., NDSS 2018.