Corpus ID: 211677512

Exploring Backdoor Poisoning Attacks Against Malware Classifiers

@article{Severi2020ExploringBP,
  title={Exploring Backdoor Poisoning Attacks Against Malware Classifiers},
  author={Giorgio Severi and Jim Meyer and Scott E. Coull and Alina Oprea},
  journal={ArXiv},
  year={2020},
  volume={abs/2003.01031}
}
  • Giorgio Severi, Jim Meyer, Scott E. Coull, Alina Oprea
  • Published 2020
  • Computer Science
  • ArXiv
  • Current training pipelines for machine learning (ML) based malware classification rely on crowdsourced threat feeds, exposing a natural attack injection point. We study for the first time the susceptibility of ML malware classifiers to backdoor poisoning attacks, specifically focusing on challenging "clean label" attacks where attackers do not control the sample labeling process. We propose the use of techniques from explainable machine learning to guide the selection of relevant features and…
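
To make the approach sketched in the abstract concrete, below is a minimal, hypothetical illustration of an explanation-guided, clean-label backdoor poisoning attack on a feature-based malware classifier. It assumes Python with numpy, scikit-learn, lightgbm, and shap installed, and it substitutes synthetic feature vectors for real PE features (e.g., EMBER). The LightGBM surrogate, the benign-class-median trigger values, the 2% poisoning budget, and the stamp() helper are illustrative assumptions, not the paper's exact feature- and value-selection strategies.

import numpy as np
import shap
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a malware dataset: label 1 = malware, 0 = benign.
X, y = make_classification(n_samples=6000, n_features=50, n_informative=15,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Step 1: the attacker trains a surrogate model on (a proxy of) the data.
surrogate = LGBMClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Step 2: explanation-guided feature selection via SHAP values.
sv = shap.TreeExplainer(surrogate).shap_values(X_train)
if isinstance(sv, list):          # some SHAP versions return one array per class
    sv = sv[1]
sv = np.asarray(sv)
if sv.ndim == 3:                  # or a (samples, features, classes) array
    sv = sv[:, :, 1]
importance = np.abs(sv).mean(axis=0)
trigger_feats = np.argsort(importance)[-8:]    # 8 most influential features

# Step 3: choose a trigger value per selected feature (benign-class median here;
# the paper explores several value-selection strategies).
benign_train = X_train[y_train == 0]
trigger_vals = np.median(benign_train[:, trigger_feats], axis=0)

def stamp(samples):
    """Return a copy of `samples` with the backdoor trigger written in."""
    out = samples.copy()
    out[:, trigger_feats] = trigger_vals
    return out

# Step 4: clean-label poisoning -- stamp the trigger onto a small set of benign
# samples while leaving their (correct) benign labels untouched.
poison_budget = int(0.02 * len(X_train))       # 2% of the training set
benign_idx = np.where(y_train == 0)[0]
poison_idx = rng.choice(benign_idx, size=poison_budget, replace=False)
X_poisoned = X_train.copy()
X_poisoned[poison_idx] = stamp(X_train[poison_idx])

# Step 5: the victim trains on the poisoned feed; at test time the attacker
# stamps the trigger onto malware to slip it past the classifier.
victim = LGBMClassifier(n_estimators=200, random_state=0).fit(X_poisoned, y_train)
malware_test = X_test[y_test == 1]
print("malware detected without trigger: %.2f%%" % (100 * victim.predict(malware_test).mean()))
print("malware detected with trigger:    %.2f%%" % (100 * victim.predict(stamp(malware_test)).mean()))

Comparing detection rates on held-out malware with and without the trigger gives a rough attack-success measure; the paper evaluates a range of feature- and value-selection strategies, models, and defenses beyond this sketch.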

    References

    Publications referenced by this paper.

    Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables

    Exploring Adversarial Examples in Malware Detection

    Evasion Attacks against Machine Learning at Test Time

    Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning

    Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering

    Is Feature Selection Secure against Training Data Poisoning?
