Corpus ID: 235187345

OFEI: A Semi-black-box Android Adversarial Sample Attack Framework Against DLaaS

@article{Xu2021OFEIAS,
  title={OFEI: A Semi-black-box Android Adversarial Sample Attack Framework Against DLaaS},
  author={Guangquan Xu and Guohua Xin and Litao Jiao and Jian Liu and Shaoying Liu and Meiqi Feng and Xi Zheng},
  journal={ArXiv},
  year={2021},
  volume={abs/2105.11593}
}
With the growing popularity of Android devices, Android malware seriously threatens the safety of users. Although such threats can be detected by deep learning as a service (DLaaS), deep neural networks, the weakest part of DLaaS, are often deceived by adversarial samples crafted by attackers. In this paper, we propose a new semi-black-box attack framework called one-feature-each-iteration (OFEI) to craft Android adversarial samples. This framework modifies as few features as…
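The abstract is truncated here, but the framework's name points to a greedy loop that commits a single feature change per iteration. Below is a minimal sketch of such a loop, assuming binary app features and query access to the classifier's malware score; the name `prob_malware`, the addition-only constraint, and all parameters are illustrative assumptions, not details from the paper.

```python
import numpy as np

def ofei_style_attack(x, prob_malware, max_iters=50, threshold=0.5):
    """Greedy one-feature-each-iteration attack on a binary feature vector.

    x            : 1-D 0/1 numpy array of app features (e.g. permissions, API calls)
    prob_malware : callable mapping a feature vector to the model's malware score
    Only flips features from 0 to 1, a common proxy for preserving app functionality.
    """
    x_adv = x.copy()
    for _ in range(max_iters):
        if prob_malware(x_adv) < threshold:          # already classified benign
            return x_adv
        best_i, best_score = None, prob_malware(x_adv)
        for i in np.flatnonzero(x_adv == 0):         # candidate features to add
            x_try = x_adv.copy()
            x_try[i] = 1
            score = prob_malware(x_try)
            if score < best_score:                   # keep the single best flip
                best_i, best_score = i, score
        if best_i is None:                           # no flip lowers the score; stop
            break
        x_adv[best_i] = 1                            # commit one feature this iteration
    return x_adv
```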
EvadeDroid: A Practical Evasion Attack on Machine Learning for Black-box Android Malware Detection
TLDR
EvadeDroid, a practical evasion attack for circumventing black-box Android malware detectors, is presented; it preserves its stealthiness against five popular commercial antivirus products, demonstrating its feasibility in the real world.
FNet: A Two-Stream Model for Detecting Adversarial Attacks against 5G-Based Deep Learning Services
TLDR
A new two-stream network comprising an RGB stream and a spatial rich model (SRM) noise stream is proposed; it accurately detects adversarial examples and transfers well to other adversaries.
Do You Think You Can Hold Me? The Real Challenge of Problem-Space Evasion Attacks
TLDR
An examination of the robustification efforts of ML-based malware detection systems, using problem-space and feature-space evasion attacks, is presented; the results suggest that the practical usefulness of such techniques cannot be overlooked.

References

Showing 1-10 of 34 references
Transferability of Adversarial Examples to Attack Cloud-based Image Classifier Service
TLDR
A novel attack method, Fast Featuremap Loss PGD (FFL-PGD), based on a substitute model, achieves a high bypass rate with a very limited number of queries; this is the first extensive empirical study of black-box attacks against real-world cloud-based classification services.
A Data Augmentation-based Defense Method Against Adversarial Attacks in Neural Networks
TLDR
This paper focuses on developing a lightweight defense method that efficiently invalidates full white-box adversarial attacks while remaining compatible with real-life constraints, demonstrating outstanding robustness and efficiency.
Adversarial Deep Ensemble: Evasion Attacks and Defenses for Malware Detection
  • Deqiang Li, Qianmu Li · IEEE Transactions on Information Forensics and Security · 2020
TLDR
This work proposes a new attack approach, named mixture of attacks, which equips attackers with multiple generative methods and multiple manipulation sets to perturb a malware example without ruining its malicious functionality.
Adversarial Attacks Against Network Intrusion Detection in IoT Systems
TLDR
This article designs a novel adversarial attack against DL-based network intrusion detection systems (NIDSs) in the Internet-of-Things environment, requiring only black-box access to the DL model in the NIDS.
Query-Efficient Black-box Adversarial Examples (superceded)
TLDR
A new method for reliably generating adversarial examples under more restricted, practical black-box threat models is presented, along with a new algorithm for performing targeted adversarial attacks in the partial-information setting, where the attacker only has access to a limited number of target classes.
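This line of query-efficient attacks builds on natural evolution strategies (NES): the gradient is estimated purely from score queries and then fed into projected gradient steps. A minimal sketch under those assumptions follows; function names and hyperparameters are illustrative, not taken from the paper.

```python
import numpy as np

def nes_gradient(x, loss, sigma=0.01, n_samples=50, rng=None):
    """Estimate the gradient of loss(x) from queries only (NES, antithetic sampling)."""
    rng = rng or np.random.default_rng(0)
    grad = np.zeros_like(x)
    for _ in range(n_samples):
        u = rng.standard_normal(x.shape)
        grad += (loss(x + sigma * u) - loss(x - sigma * u)) * u  # antithetic pair
    return grad / (2 * sigma * n_samples)

def black_box_pgd(x, loss, eps=0.05, alpha=0.01, steps=20):
    """Gradient ascent on the estimated gradient, projected into an L-inf ball around x."""
    x_adv = x.copy()
    for _ in range(steps):
        g = nes_gradient(x_adv, loss)
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)                         # stay in valid pixel range
    return x_adv
```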
GenAttack: practical black-box attacks with gradient-free optimization
TLDR
GenAttack is introduced, a gradient-free optimization technique that uses genetic algorithms to synthesize adversarial examples in the black-box setting; it successfully attacks several state-of-the-art ImageNet defenses, including ensemble adversarial training and non-differentiable or randomized input transformations.
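A minimal sketch of a GenAttack-style loop: a population of bounded perturbations is evolved with selection, crossover, and mutation, using only the victim model's predicted probability of the target class as fitness. The population size, selection scheme, and all hyperparameters here are illustrative assumptions.

```python
import numpy as np

def gen_attack(x, target_prob, pop_size=8, eps=0.05, steps=500,
               mutation_rate=0.05, mutation_scale=0.01, rng=None):
    """Gradient-free targeted attack: evolve L-inf-bounded perturbations of x,
    scoring candidates only by querying target_prob (the victim's output)."""
    rng = rng or np.random.default_rng(0)
    pop = rng.uniform(-eps, eps, size=(pop_size,) + x.shape)   # initial population
    for _ in range(steps):
        fitness = np.array([target_prob(np.clip(x + p, 0, 1)) for p in pop])
        if fitness.max() > 0.5:                                # target class dominates
            break
        probs = (fitness + 1e-12) / (fitness + 1e-12).sum()    # selection probabilities
        idx = rng.choice(pop_size, size=(pop_size, 2), p=probs)
        parents = pop[idx]                                     # (pop_size, 2, ...)
        mask = rng.random((pop_size,) + x.shape) < 0.5         # uniform crossover
        pop = np.where(mask, parents[:, 0], parents[:, 1])
        mutate = rng.random(pop.shape) < mutation_rate         # sparse mutation
        pop = np.clip(pop + mutate * rng.normal(0, mutation_scale, pop.shape),
                      -eps, eps)
    best = pop[int(np.argmax([target_prob(np.clip(x + p, 0, 1)) for p in pop]))]
    return np.clip(x + best, 0, 1)
```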
Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks
TLDR
The study shows that defensive distillation can reduce the effectiveness of adversarial sample creation from 95% to less than 0.5% on a studied DNN, and analytically investigates the generalizability and robustness properties granted by the use of defensive distillation when training DNNs.
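The core of defensive distillation is training at a raised softmax temperature T and deploying at T = 1, which flattens the gradients that attacks such as FGSM and JSMA rely on. A minimal PyTorch-style sketch of one distillation step; the `teacher` and `student` models, the optimizer, and the value of T are assumed to be set up elsewhere.

```python
import torch
import torch.nn.functional as F

def distillation_step(student, teacher, x, optimizer, T=20.0):
    """One defensive-distillation step: fit the student to the teacher's
    temperature-softened outputs; at test time the softmax runs at T = 1."""
    with torch.no_grad():
        soft_targets = F.softmax(teacher(x) / T, dim=1)   # teacher's soft labels
    log_probs = F.log_softmax(student(x) / T, dim=1)      # student at the same temperature
    loss = F.kl_div(log_probs, soft_targets, reduction="batchmean") * T * T
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```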
Adversarial Deep Learning for Robust Detection of Binary Encoded Malware
TLDR
Methods capable of generating functionality-preserving adversarial malware examples in the binary domain are introduced; the saddle-point formulation is used to incorporate the adversarial examples into the training of models that are robust to them.
Adversarial Perturbations Against Deep Neural Networks for Malware Classification
TLDR
This paper shows how to construct highly effective adversarial-sample crafting attacks against neural networks used as malware classifiers, and evaluates the extent to which potential defensive mechanisms against adversarial crafting can be leveraged in the setting of malware classification.
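This attack adapts saliency-based crafting (JSMA) to binary malware features, where perturbations are restricted to adding features so the app keeps working. A minimal sketch under that assumption; the benign class index and a probability-output model are illustrative assumptions.

```python
import torch

def jsma_malware_attack(model, x, max_flips=20):
    """Saliency-guided feature addition for a binary-feature malware classifier.

    model : returns class probabilities, class 0 = benign (assumed)
    x     : 1-D 0/1 feature tensor; flips are restricted to 0 -> 1
    """
    x_adv = x.clone().detach()
    for _ in range(max_flips):
        x_adv.requires_grad_(True)
        benign_score = model(x_adv.unsqueeze(0))[0, 0]
        if benign_score > 0.5:                             # classifier now says benign
            return x_adv.detach()
        model.zero_grad()
        benign_score.backward()
        grad = x_adv.grad.detach().clone()
        grad[x_adv.detach() == 1] = -float("inf")          # only consider adding features
        i = torch.argmax(grad)                             # most salient feature to add
        x_adv = x_adv.detach()
        x_adv[i] = 1.0                                     # flip one feature per step
    return x_adv
```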
Cloud-based Image Classification Service is Not Robust to Affine Transformation: A Forgotten Battlefield
TLDR
This paper makes the first attempt to conduct an extensive empirical study of affine transformation (AT) attacks against mainstream real-world cloud-based classification services, and proposes two defense algorithms to address these security challenges.