secml: A Python Library for Secure and Explainable Machine Learning

@article{Melis2019secmlAP,
  title={secml: A Python Library for Secure and Explainable Machine Learning},
  author={Marco Melis and Ambra Demontis and Maura Pintor and Angelo Sotgiu and Battista Biggio},
  journal={ArXiv},
  year={2019},
  volume={abs/1912.10013}
}

We present secml, an open-source Python library for secure and explainable machine learning. It implements the most popular attacks against machine learning, including not only test-time evasion attacks to generate adversarial examples against deep neural networks, but also training-time poisoning attacks against support vector machines and many other algorithms. These attacks enable evaluating the security of learning algorithms and of the corresponding defenses under both white-box and black-box threat models.
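
To make the attack family described above concrete, below is a minimal, self-contained NumPy sketch of a test-time evasion attack: projected gradient descent on a linear classifier's margin under an L-infinity perturbation budget. This is a conceptual illustration of the technique, not secml's own API; the helper evade_linear, the toy weights, and the parameter values are all hypothetical choices for this example.

import numpy as np

def evade_linear(w, b, x, y, eps=0.5, steps=20, lr=0.1):
    """Projected-gradient evasion of a linear classifier f(x) = sign(w @ x + b).

    Perturbs x within an L-infinity ball of radius eps to flip the decision
    for the true label y in {-1, +1}. Hypothetical helper for illustration;
    this is not part of secml's API.
    """
    x_adv = x.copy()
    for _ in range(steps):
        # The margin y * (w @ x + b) has gradient y * w w.r.t. x;
        # stepping against its sign pushes x toward the decision boundary.
        x_adv = x_adv - lr * np.sign(y * w)
        # Project back onto the L-infinity ball around the original point.
        x_adv = np.clip(x_adv, x - eps, x + eps)
        if y * (w @ x_adv + b) < 0:  # margin flipped: evasion succeeded
            break
    return x_adv

# Toy usage: a 2-D linear classifier and a correctly classified point.
w, b = np.array([1.0, -1.0]), 0.0
x, y = np.array([1.0, 0.2]), +1               # initial margin: +0.8
x_adv = evade_linear(w, b, x, y, eps=0.5)
print(y * (w @ x_adv + b))                    # negative => misclassified

Sweeping eps from zero upward and recording accuracy on the perturbed points is one way attacks of this kind support the security evaluation the abstract describes.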
