Corpus ID: 52185612

On the Intriguing Connections of Regularization, Input Gradients and Transferability of Evasion and Poisoning Attacks

@article{Demontis2018OnTI,
  title={On the Intriguing Connections of Regularization, Input Gradients and Transferability of Evasion and Poisoning Attacks},
  author={Ambra Demontis and Marco Melis and Maura Pintor and Matthew Jagielski and Battista Biggio and Alina Oprea and Cristina Nita-Rotaru and Fabio Roli},
  journal={ArXiv},
  year={2018},
  volume={abs/1809.02861}
}
Transferability captures the ability of an attack against a machine-learning model to remain effective against a different, potentially unknown, model. The study of attack transferability has gained interest in recent years due to the deployment of cyber-attack detection services based on machine learning. In these applications, service providers avoid disclosing information about their machine-learning algorithms. As a result, attackers trying to bypass detection are forced to…
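
To make the setting concrete, here is a minimal sketch, not the paper's code, of a transfer-based evasion attack: adversarial examples are crafted with an FGSM-style step against a surrogate logistic-regression model (whose input gradient is analytic) and then replayed against a different target model, a kernel SVM, that the attacker never queries. The dataset, models, and perturbation budget are all illustrative assumptions.

```python
# Minimal sketch of a transfer evasion attack (assumed setup, not the paper's code).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

surrogate = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)  # attacker's local model
target = SVC(kernel="rbf").fit(X_tr, y_tr)                     # victim's unknown model

# FGSM-style step: for logistic regression, the gradient of the logistic loss
# with respect to the input is (sigmoid(w.x + b) - y) * w, so it is analytic.
w, b = surrogate.coef_.ravel(), surrogate.intercept_[0]
p = 1.0 / (1.0 + np.exp(-(X_te @ w + b)))
grad = (p - y_te)[:, None] * w[None, :]
eps = 0.5                                   # assumed L-infinity perturbation budget
X_adv = X_te + eps * np.sign(grad)

print("target accuracy, clean inputs:      ", target.score(X_te, y_te))
print("target accuracy, transferred attack:", target.score(X_adv, y_te))
```

A drop in the target's accuracy on the perturbed points relative to the clean ones indicates that attacks crafted on the surrogate transferred to the unseen model, which is the phenomenon the paper analyzes.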
