Corpus ID: 52185612

On the Intriguing Connections of Regularization, Input Gradients and Transferability of Evasion and Poisoning Attacks

@article{Demontis2018OnTI,
  title={On the Intriguing Connections of Regularization, Input Gradients and Transferability of Evasion and Poisoning Attacks},
  author={A. Demontis and Marco Melis and Maura Pintor and M. Jagielski and B. Biggio and Alina Oprea and C. Nita-Rotaru and F. Roli},
  journal={ArXiv},
  year={2018},
  volume={abs/1809.02861}
}
  • Published 2018
  • Computer Science, Mathematics
  • ArXiv
  • Transferability captures the ability of an attack against a machine-learning model to be effective against a different, potentially unknown, model. Studying the transferability of attacks has gained interest in recent years due to the deployment of cyber-attack detection services based on machine learning. For these applications of machine learning, service providers avoid disclosing information about their machine-learning algorithms. As a result, attackers trying to bypass detection are forced to…
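The abstract's core notion is that an attack crafted against one (surrogate) model can remain effective against a different (target) model. A minimal sketch of this idea, not the paper's experimental setup, appears below: it trains two L2-regularized logistic regressions with different regularization strengths (echoing the paper's regularization theme), crafts standard fast-gradient-sign (FGSM) evasion samples on the surrogate only, and checks how well they transfer to the target. All function names, parameters, and the synthetic task are illustrative assumptions.

```python
# Minimal transferability sketch (illustrative only; not the paper's setup).
# Two L2-regularized logistic regressions are trained on the same synthetic
# task; FGSM evasion samples crafted on the "surrogate" are then evaluated
# against the separately regularized "target".
import numpy as np

rng = np.random.default_rng(0)

def make_task(n=2000, d=20):
    # Synthetic binary task: labels given by a random linear rule.
    w_true = rng.normal(size=d)
    X = rng.normal(size=(n, d))
    y = (X @ w_true > 0).astype(float)
    return X, y

def train_logreg(X, y, lam, epochs=300, lr=0.1):
    # Gradient descent on L2-regularized logistic loss; lam is the
    # regularization strength the paper relates to transferability.
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * (X.T @ (p - y) / n + lam * w)
    return w

def fgsm(X, y, w, eps):
    # Fast-gradient-sign evasion: for logistic loss, d(loss)/dx = (p - y) * w.
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    return X + eps * np.sign(np.outer(p - y, w))

def accuracy(X, y, w):
    return float(((X @ w > 0).astype(float) == y).mean())

X, y = make_task()
Xtr, ytr, Xte, yte = X[:1000], y[:1000], X[1000:], y[1000:]

w_surrogate = train_logreg(Xtr, ytr, lam=1e-2)  # attacker's substitute model
w_target = train_logreg(Xtr, ytr, lam=1e-1)     # defender's (unknown) model

X_adv = fgsm(Xte, yte, w_surrogate, eps=0.3)    # crafted on the surrogate only

print("target accuracy, clean points:      ", accuracy(Xte, yte, w_target))
print("target accuracy, transferred attack:", accuracy(X_adv, yte, w_target))
```

Running this should print high target accuracy on the clean test points and a sharp drop on the surrogate-crafted points, i.e., the evasion attack transfers despite the target never being queried during crafting.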
    5 Citations
    • Enhancing Deep Neural Networks Against Adversarial Malware Examples
    • SoK: Arms Race in Adversarial Malware Detection (highly influenced)
