Corpus ID: 246473222

Realizable Universal Adversarial Perturbations for Malware

Raphael Labaca-Castro, Luis Muñoz-González, Feargus Pendlebury, Gabi Dreo Rodosek, Fabio Pierazzi, Lorenzo Cavallaro
Machine learning classifiers are vulnerable to adversarial examples—input-specific perturbations that manipulate models’ output. Universal Adversarial Perturbations (UAPs), which identify noisy patterns that generalize across the input space, allow the attacker to greatly scale up the generation of such examples. Although UAPs have been explored in application domains beyond computer vision, little is known about their properties and implications in the specific context of realizable attacks… 
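The universal property can be illustrated with a toy linear model. Everything below (weights, inputs, and the perturbation itself) is invented for illustration and is not from the paper:

```python
# Toy illustration: one input-agnostic perturbation `delta` that flips
# a linear classifier's decision on every input at once -- the defining
# property of a universal adversarial perturbation (UAP).

def classify(w, b, x):
    """Linear classifier: 1 ('malicious') if w.x + b > 0, else 0."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

w, b = [1.0, -2.0], 0.0
inputs = [[3.0, 1.0], [2.0, 0.5], [4.0, 1.5]]   # all score positive

# A single perturbation aligned against the weight vector works for
# every input above, so the attacker crafts it once and reuses it.
delta = [-4.0, 2.0]

originals = [classify(w, b, x) for x in inputs]
perturbed = [classify(w, b, [xi + di for xi, di in zip(x, delta)])
             for x in inputs]
```

Because the perturbation does not depend on the individual input, generating adversarial examples scales to arbitrarily many samples at no extra cost per sample.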


Universal Adversarial Perturbations for Malware
While adversarial training in the feature space must deal with large and often unconstrained regions, UAPs in the problem space identify specific vulnerabilities that allow us to harden a classifier more effectively, shifting the challenges and associated cost of identifying new universal adversarial transformations back to the attacker.
Adversarial Deep Learning for Robust Detection of Binary Encoded Malware
Methods capable of generating functionally preserved adversarial malware examples in the binary domain are introduced using the saddle-point formulation to incorporate the adversarial examples into the training of models that are robust to them.
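The saddle-point (min-max) formulation can be sketched on a toy linear model with binary features, where the inner maximization enumerates functionality-preserving 0→1 feature flips and the outer step minimizes loss on the worst case found. All numbers and the feature-flip threat model here are illustrative assumptions, not the paper's actual setup:

```python
import math

def loss(wvec, x, y):
    """Logistic loss for a linear model; y is +1 or -1."""
    z = sum(wi * xi for wi, xi in zip(wvec, x))
    return math.log(1 + math.exp(-y * z))

def worst_case(wvec, x, y):
    """Inner maximization: try each 0->1 feature flip (a stand-in for
    functionality-preserving additions) and keep the worst loss."""
    best_x, best_l = x, loss(wvec, x, y)
    for i in range(len(x)):
        if x[i] == 0:
            xa = x[:]
            xa[i] = 1
            la = loss(wvec, xa, y)
            if la > best_l:
                best_x, best_l = xa, la
    return best_x

def adv_train_step(wvec, data, lr=0.1):
    """Outer minimization: gradient step on the adversarial examples."""
    for x, y in data:
        xa = worst_case(wvec, x, y)
        z = sum(wi * xi for wi, xi in zip(wvec, xa))
        g = -y / (1 + math.exp(y * z))   # d(loss)/dz
        wvec = [wi - lr * g * xi for wi, xi in zip(wvec, xa)]
    return wvec
```

Training on `worst_case` inputs rather than clean ones is what makes the resulting model robust to the same class of perturbations.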
AIMED-RL: Exploring Adversarial Malware Examples with Reinforcement Learning
AIMED-RL (Automatic Intelligent Malware modifications to Evade Detection using Reinforcement Learning) is presented; it generates adversarial examples that lead machine learning models to misclassify malware files without compromising their functionality.
NAG: Network for Adversary Generation
Perturbations crafted by the proposed generative approach, which models the distribution of adversarial perturbations, achieve state-of-the-art fooling rates, exhibit wide variety, and deliver excellent cross-model generalizability.
Deep Reinforcement Adversarial Learning Against Botnet Evasion Attacks
This work proposes the first framework that can protect botnet detectors from adversarial attacks through deep reinforcement learning mechanisms and paves the way to novel and more robust cybersecurity detectors based on machine learning applied to network traffic analytics.
Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks
This paper introduces a structured approach for generating Universal Adversarial Perturbations (UAPs) with procedural noise functions, and unveils the systemic vulnerability of popular DCN models like Inception v3 and YOLO v3, with single noise patterns able to fool a model on up to 90% of the dataset.
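A minimal sketch of the procedural-noise idea: the perturbation is generated from a handful of parameters (a plain sinusoid here, rather than the Perlin or Gabor noise the paper uses), so a black-box attacker only needs to search a very low-dimensional parameter space instead of per-pixel space:

```python
import math

def procedural_pattern(width, height, freq=0.3, angle=0.8, scale=1.0):
    """Generate a width x height perturbation pattern from just three
    parameters. Illustrative sinusoidal noise; the paper's procedural
    noise functions (Perlin, Gabor) are similarly low-dimensional."""
    ca, sa = math.cos(angle), math.sin(angle)
    return [[scale * math.sin(2 * math.pi * freq * (x * ca + y * sa))
             for x in range(width)]
            for y in range(height)]
```

Because the whole pattern is a deterministic function of a few parameters, the same parameter setting yields a single reusable (universal) perturbation.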
Towards Deep Learning Models Resistant to Adversarial Attacks
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
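The first-order adversary in this robust-optimization view is typically instantiated as projected gradient descent (PGD). A minimal sketch on an arbitrary differentiable loss, with `grad_fn`, the step size, and the ball radius all illustrative assumptions:

```python
def pgd_attack(grad_fn, x0, eps=0.5, alpha=0.1, steps=20):
    """Projected gradient descent: repeatedly take a signed gradient
    step to increase the loss, then project the iterate back into the
    L-infinity ball of radius eps around the clean input x0."""
    x = x0[:]
    for _ in range(steps):
        g = grad_fn(x)
        # signed ascent step
        x = [xi + alpha * (1 if gi > 0 else -1) for xi, gi in zip(x, g)]
        # projection: clip each coordinate to [x0_i - eps, x0_i + eps]
        x = [min(max(xi, x0i - eps), x0i + eps) for xi, x0i in zip(x, x0)]
    return x
```

With a constant-gradient toy loss, the attack saturates at the corner of the epsilon-ball, which is the expected worst case for a linear objective.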
Adversarial Machine Learning at Scale
This research applies adversarial training to ImageNet, finds that single-step attacks are the best for mounting black-box attacks, and resolves a "label leaking" effect that causes adversarially trained models to perform better on adversarial examples than on clean examples.
Defending Against Universal Perturbations With Shared Adversarial Training
This work shows that adversarial training is more effective at preventing universal perturbations, where the same perturbation needs to fool a classifier on many inputs, and investigates the trade-off between robustness against universally perturbed data and performance on unperturbed data.
Adversarial Malware Binaries: Evading Deep Learning for Malware Detection in Executables
This work proposes a gradient-based attack that is capable of evading a recently-proposed deep network suited to this purpose by only changing a few specific bytes at the end of each malware sample, while preserving its intrusive functionality.
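A hedged sketch of the append-bytes idea: bytes placed past a PE file's declared sections are overlay data the loader ignores, so functionality is preserved while a raw-byte classifier sees a different input. The paper's attack is gradient-based; the greedy search and toy scorer below are stand-ins invented purely for illustration:

```python
def greedy_evasion(sample: bytes, score_fn, budget=10, threshold=0.5):
    """Append up to `budget` overlay bytes, each greedily chosen to
    minimize the malware score, stopping once the sample dips below
    the detection threshold."""
    for _ in range(budget):
        if score_fn(sample) < threshold:
            break
        best = min(range(256), key=lambda b: score_fn(sample + bytes([b])))
        sample += bytes([best])
    return sample

def toy_score(s: bytes) -> float:
    """Hypothetical stand-in scorer: pretends each appended 0x41 byte
    lowers the malware score by 0.2. A real attack would instead query
    the target model (or follow its gradient)."""
    return max(0.0, 1.0 - 0.2 * s.count(0x41))

evaded = greedy_evasion(b"MZ\x90\x00", toy_score)
```

The original bytes are never modified, only extended, which is why the sample's runtime behavior is unchanged.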