StratDef: Strategic Defense Against Adversarial Attacks in ML-based Malware Detection
@inproceedings{Rashid2022StratDefSD,
  title  = {StratDef: Strategic Defense Against Adversarial Attacks in ML-based Malware Detection},
  author = {Aqib Rashid and Jose M. Such},
  year   = {2022}
}
Abstract: Over the years, most research on defenses against adversarial attacks on machine learning models has been in the image recognition domain. The malware detection domain has received less attention despite its importance. Moreover, most work exploring these defenses has considered several methods but without a strategy for applying them. In this paper, we introduce StratDef, a strategic defense system based on a moving target defense approach. We overcome challenges related to the…
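Since the abstract is truncated here, only the high-level idea is visible: rather than committing to a single fixed model, a moving target defense strategically varies which model answers each query, so an attacker cannot reliably craft adversarial examples against one known target. The sketch below illustrates that general idea only; the MovingTargetDefense class, the scikit-learn model pool, and the hand-picked strategy vector are illustrative assumptions, not the paper's actual system.

```python
# Minimal sketch of a moving-target-defense predictor (illustrative only):
# each query is answered by a model drawn at random from a pool according
# to a strategy vector, denying the attacker a single fixed target.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier


class MovingTargetDefense:
    def __init__(self, models, strategy, seed=0):
        # `strategy` is a probability vector over `models`. StratDef derives
        # its strategy strategically; here it is simply supplied by hand.
        self.models = models
        self.strategy = np.asarray(strategy, dtype=float)
        self.strategy /= self.strategy.sum()
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        for model in self.models:
            model.fit(X, y)
        return self

    def predict(self, X):
        X = np.asarray(X)
        preds = np.empty(len(X), dtype=int)
        for i in range(len(X)):
            # A fresh draw per query keeps the serving model unpredictable.
            idx = self.rng.choice(len(self.models), p=self.strategy)
            preds[i] = self.models[idx].predict(X[i : i + 1])[0]
        return preds


# Illustrative usage with an off-the-shelf model pool and hypothetical
# malware feature matrices X_train/X_test and labels y_train:
# mtd = MovingTargetDefense(
#     models=[RandomForestClassifier(n_estimators=100),
#             LogisticRegression(max_iter=1000),
#             MLPClassifier(max_iter=500)],
#     strategy=[0.5, 0.3, 0.2],
# ).fit(X_train, y_train)
# y_pred = mtd.predict(X_test)
```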
References
Showing 1-10 of 96 references
Attack and Defense of Dynamic Analysis-Based, Adversarial Neural Malware Classification Models
- 2017
It is demonstrated that while adding hidden layers to neural models does not significantly improve malware classification accuracy, it does significantly increase the classifier's robustness to adversarial attacks.
A moving target defense against adversarial machine learning
- SEC, 2019
This work shows that, in addition to switching among algorithms, one can introduce randomness into tuning parameters and model choices to achieve a better defense against adversarial machine learning.
A Framework for Enhancing Deep Neural Networks Against Adversarial Malware
- IEEE Transactions on Network Science and Engineering, 2021
A defense framework to enhance the robustness of deep neural networks against adversarial malware evasion attacks is proposed; it won the AICS'2019 challenge with 76.02% accuracy in a setting where the defender does not know the attacks and the attacker does not know the defense.
Universal Adversarial Perturbations for Malware
- ArXiv, 2021
While adversarial training in the feature space must deal with large and often unconstrained regions, UAPs in the problem space identify specific vulnerabilities that allow us to harden a classifier more effectively, shifting the challenges and associated cost of identifying new universal adversarial transformations back to the attacker.
MTDeep: Boosting the Security of Deep Neural Nets Against Adversarial Attacks with Moving Target Defense
- AAAI Workshops, 2018
This paper draws inspiration from the fields of cybersecurity and multi-agent systems and proposes to leverage the concept of Moving Target Defense in designing a meta-defense for 'boosting' the robustness of an ensemble of deep neural networks for visual classification tasks against such adversarial attacks.
When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks
- USENIX Security Symposium, 2018
StingRay, a targeted poisoning attack, is designed to be broadly applicable: it is practical against 4 machine learning applications using 3 different learning algorithms, and it can bypass 2 existing defenses.
Enhancing Deep Neural Networks Against Adversarial Malware Examples
- ArXiv, 2020
Inspired by the AICS'2019 Challenge organized by MIT Lincoln Laboratory, this work systematizes a number of principles for enhancing the robustness of neural networks against adversarial malware evasion attacks.
Binary Black-box Evasion Attacks Against Deep Learning-based Static Malware Detectors with Adversarial Byte-Level Language Model
- ArXiv, 2020
This work proposes MalRNN, a novel deep learning-based approach that automatically generates evasive malware variants without the restrictions of prior methods; it effectively evades three recent deep learning-based malware detectors and outperforms current benchmark methods.
Automated poisoning attacks and defenses in malware detection systems: An adversarial machine learning approach
- Comput. Secur., 2018
Exploring Adversarial Examples in Malware Detection
- IEEE Security and Privacy Workshops (SPW), 2019
By training an existing model on a production-scale dataset, it is shown that some previous attacks are less effective than initially reported, while simultaneously highlighting architectural weaknesses that facilitate new attack strategies for malware classification.