Single-Shot Black-Box Adversarial Attacks Against Malware Detectors: A Causal Language Model Approach

@article{Hu2021SingleShotBA,
  title={Single-Shot Black-Box Adversarial Attacks Against Malware Detectors: A Causal Language Model Approach},
  author={James Lee Hu and Mohammadreza Ebrahimi and Hsinchun Chen},
  journal={2021 IEEE International Conference on Intelligence and Security Informatics (ISI)},
  year={2021},
  pages={1-6}
}
Deep Learning (DL)-based malware detectors are increasingly adopted for the early detection of malicious behavior in cybersecurity. However, their susceptibility to adversarial malware variants has raised serious security concerns. Generating such adversarial variants on the defender's side is crucial to improving the resistance of DL-based malware detectors against them. This necessity has given rise to an emerging stream of machine learning research, Adversarial Malware example Generation (AMG), which aims… 
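
The abstract is truncated above, but the title points to an append-style, single-query ("single-shot") black-box evasion attack driven by a causal language model. The sketch below is illustrative only and is not the paper's released code: it assumes a tiny, untrained byte-level GPT-2 from Hugging Face transformers standing in for a causal LM trained on benign binary content, a hypothetical black-box scoring function detector_score, and an arbitrary append budget APPEND_LEN. The LM samples a benign-looking byte sequence that is appended to the malware binary, leaving the original bytes (and thus the program's functionality) untouched, before a single detector query is issued.

```python
# Minimal sketch of an append-based, single-query black-box evasion attempt.
# Assumptions (not from the paper): a tiny untrained byte-level GPT-2 stands in
# for a causal LM trained on benign binaries, and `detector_score` is a
# hypothetical black-box detector returning a maliciousness score.

import torch
from transformers import GPT2Config, GPT2LMHeadModel

VOCAB = 256         # one token per possible byte value
APPEND_LEN = 2048   # assumed byte budget for the appended payload

config = GPT2Config(vocab_size=VOCAB, n_positions=4096,
                    n_embd=128, n_layer=2, n_head=4)
lm = GPT2LMHeadModel(config).eval()  # in practice, pre-trained on benign bytes

def generate_append_bytes(seed: bytes, length: int = APPEND_LEN) -> bytes:
    """Autoregressively sample up to `length` bytes, conditioned on a seed."""
    ids = torch.tensor([list(seed)], dtype=torch.long)
    with torch.no_grad():
        out = lm.generate(ids,
                          max_length=min(len(seed) + length, config.n_positions),
                          do_sample=True, top_k=50, pad_token_id=0)
    generated = out[0, ids.shape[1]:].tolist()  # drop the seed prefix
    return bytes(generated)

def single_shot_attack(malware: bytes, detector_score) -> tuple[bytes, float]:
    """Append LM-generated bytes and issue exactly one detector query."""
    payload = generate_append_bytes(malware[-64:])  # condition on the file tail
    variant = malware + payload                     # original bytes untouched
    return variant, detector_score(variant)         # the only detector query
```

Bytes appended past the end of a PE file's declared sections are typically treated as overlay data and ignored at load time, which is why append-based perturbations can preserve functionality; issuing only a single query also matters against detectors that rate-limit or flag repeated probing.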

References

Showing 1-10 of 28 references
Binary Black-box Evasion Attacks Against Deep Learning-based Static Malware Detectors with Adversarial Byte-Level Language Model
TLDR: This work proposes MalRNN, a novel deep learning-based approach that automatically generates evasive malware variants without restrictive assumptions, effectively evading three recent deep learning-based malware detectors and outperforming current benchmark methods.
Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables
TLDR: This work proposes a gradient-based attack capable of evading a recently proposed deep network for malware detection by changing only a few bytes at the end of each malware sample, while preserving its intrusive functionality.
Functionality-Preserving Black-Box Optimization of Adversarial Windows Malware
TLDR: This paper presents a novel family of black-box attacks that are both query-efficient and functionality-preserving, as they rely on injecting benign content either at the end of the malicious file or within newly created sections.
Exploring Adversarial Examples in Malware Detection
TLDR: Training an existing model on a production-scale dataset shows that some previous attacks are less effective than initially reported, while highlighting architectural weaknesses that enable new attack strategies against malware classifiers.
Explaining Vulnerabilities of Deep Learning to Adversarial Malware Binaries
TLDR: This work finds that a recently proposed convolutional neural network does not learn meaningful characteristics for malware detection from the data and text sections of executable files, but instead tends to discriminate between benign and malicious samples based on characteristics found in the file header.
Generation & Evaluation of Adversarial Examples for Malware Obfuscation
TLDR: Presents a generative model for executable adversarial malware examples based on obfuscation that achieves misclassification rates of up to 100% and 98% in white-box and black-box settings, respectively, and demonstrates transferability.
ARMED: How Automatic Malware Modifications Can Evade Static Detection?
TLDR: It is shown that only six perturbations are required to create new functional malware samples that exhibit exactly the same behavior as the original, previously detected malware, yet receive up to 80% fewer detections.
Black-Box Attacks against RNN based Malware Detection Algorithms
TLDR: Experimental results show that RNN-based malware detection algorithms fail to detect most of the generated adversarial examples, indicating that the proposed model can effectively bypass these detectors.
Adversarial Examples for CNN-Based Malware Detectors
TLDR: This paper proposes two novel white-box methods and one novel black-box method for attacking a recently proposed malware detector, along with a pre-detection mechanism that rejects adversarial examples to improve the safety and efficiency of malware detection.
Poster: Attacking Malware Classifiers by Crafting Gradient-Attacks that Preserve Functionality
TLDR: Initial results demonstrate that the presented gradient-based approach can automatically find optimal adversarial examples more efficiently, which can provide good support for building more robust models in the future.