Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization
- Luis Muñoz-González, B. Biggio, F. Roli
- Computer Science · AISec@CCS
- 29 August 2017
This work proposes a novel poisoning algorithm based on the idea of back-gradient optimization that can target a wider class of learning algorithms trained with gradient-based procedures, including neural networks and deep learning architectures, and empirically evaluates its effectiveness on several application examples.
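As a rough illustration of the bilevel structure (not the paper's algorithm), the sketch below poisons a logistic-regression learner: the paper's back-gradient trick reverses the inner gradient-descent updates to obtain the hypergradient without storing the training trace, whereas this toy version simply retrains and estimates the hypergradient by finite differences. The data, learning rates, and single poisoning point are illustrative assumptions.

```python
# Toy bilevel poisoning: maximise validation loss w.r.t. one training point.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, steps=200, lr=0.5):
    """Inner problem: plain gradient descent on logistic regression."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

def val_loss(w, Xv, yv):
    p = np.clip(sigmoid(Xv @ w), 1e-9, 1 - 1e-9)
    return -np.mean(yv * np.log(p) + (1 - yv) * np.log(1 - p))

# Toy task: two Gaussian blobs; the attacker controls one training point.
X = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
Xv = np.vstack([rng.normal(-1, 1, (50, 2)), rng.normal(1, 1, (50, 2))])
yv = y.copy()

x_p, y_p = np.zeros(2), 0  # poisoning point with a fixed (adversarial) label

for _ in range(20):  # outer loop: ascend the validation loss w.r.t. x_p
    g = np.zeros_like(x_p)
    for j in range(2):  # finite-difference hypergradient, coordinate by coordinate
        e = np.zeros(2)
        e[j] = 1e-3
        lp = val_loss(train(np.vstack([X, x_p + e]), np.append(y, y_p)), Xv, yv)
        lm = val_loss(train(np.vstack([X, x_p - e]), np.append(y, y_p)), Xv, yv)
        g[j] = (lp - lm) / 2e-3
    x_p += 0.5 * g  # gradient ascent: the attacker maximises validation error
print("optimised poisoning point:", x_p, "label:", y_p)
```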
Automated Dynamic Analysis of Ransomware: Benefits, Limitations and use for Detection
- D. Sgandurra, Luis Muñoz-González, Rabih Mohsen, Emil C. Lupu
- Computer Science · ArXiv
- 10 September 2016
EldeRan, a machine learning approach for dynamically analysing and classifying ransomware, is presented. The results suggest that dynamic analysis can support ransomware detection, since ransomware samples exhibit characteristic run-time features that are common across families and that help the early detection of new variants.
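A minimal sketch of the kind of pipeline EldeRan describes (mutual-information feature selection over binary run-time features, followed by a regularised logistic-regression classifier), using synthetic placeholder data rather than the paper's sandbox traces:

```python
# Feature selection + regularised classifier over dynamic-analysis features.
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
n, d = 400, 300                       # samples x binary behavioural features
X = rng.integers(0, 2, (n, d)).astype(float)
y = rng.integers(0, 2, n)             # 1 = ransomware, 0 = goodware
X[y == 1, :10] += 1                   # make a few features informative
X = np.clip(X, 0, 1)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(
    SelectKBest(mutual_info_classif, k=40),    # keep most informative features
    LogisticRegression(C=1.0, max_iter=1000),  # L2-regularised classifier
)
clf.fit(Xtr, ytr)
print("held-out accuracy:", clf.score(Xte, yte))
```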
Label Sanitization against Label Flipping Poisoning Attacks
- Andrea Paudice, Luis Muñoz-González, Emil C. Lupu
- Computer Science · Nemesis/UrbReas/SoGood/IWAISe/GDM@PKDD/ECML
- 2 March 2018
This paper proposes an efficient algorithm to perform optimal label-flipping poisoning attacks and a mechanism to detect and relabel suspicious data points, mitigating the effect of such attacks.
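A toy sketch of both sides of this setting, assuming a kNN-based relabeling rule as the sanitisation step; the flip-selection heuristic and thresholds below are illustrative, not the paper's exact optimisation:

```python
# Label-flipping attack followed by kNN-based label sanitisation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Attack: flip the labels a surrogate model is most confident about,
# which moves the decision boundary the most per flipped label.
surrogate = LogisticRegression(max_iter=1000).fit(X, y)
conf = np.abs(surrogate.decision_function(X))
flip = np.argsort(-conf)[:20]                  # 10% poisoning budget
y_poisoned = y.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

# Defence: relabel a point if at least `eta` of its k neighbours disagree.
k, eta = 10, 0.8
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
_, idx = nn.kneighbors(X)
y_clean = y_poisoned.copy()
for i in range(len(X)):
    neigh = y_poisoned[idx[i, 1:]]             # skip the point itself
    maj = int(neigh.mean() >= 0.5)
    if (neigh == maj).mean() >= eta and maj != y_poisoned[i]:
        y_clean[i] = maj

print("flipped labels recovered:", (y_clean[flip] == y[flip]).mean())
```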
Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection
- Andrea Paudice, Luis Muñoz-González, A. György, Emil C. Lupu
- Computer Science · ArXiv
- 8 February 2018
This paper proposes a defence mechanism based on outlier detection to mitigate the effect of these optimal poisoning attacks, and shows empirically that the adversarial examples generated by these attack strategies are quite different from genuine points, as no detectability constraints are considered when crafting the attack.
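A minimal sketch of the defence's shape: fit a per-class anomaly detector on a small trusted subset and discard training points that look anomalous for their label. EllipticEnvelope stands in here for the paper's distance-based outlier scores, and all data is synthetic:

```python
# Per-class outlier detection as a filter against optimal poisoning points.
import numpy as np
from sklearn.covariance import EllipticEnvelope

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
X_poison = rng.normal(2, 0.5, (10, 2))   # attack points placed in class-1
y_poison = np.zeros(10, dtype=int)       # territory but labelled as class 0
Xt, yt = np.vstack([X, X_poison]), np.append(y, y_poison)

keep = np.ones(len(Xt), dtype=bool)
for c in (0, 1):
    trusted = X[y == c][:30]                       # small trusted sample per class
    det = EllipticEnvelope(contamination=0.05).fit(trusted)
    mask = yt == c
    keep[mask] = det.predict(Xt[mask]) == 1        # -1 marks outliers
print("poison points removed:", (~keep[-10:]).sum(), "of 10")
```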
Byzantine-Robust Federated Machine Learning through Adaptive Model Averaging
- Luis Muñoz-González, Kenneth T. Co, Emil C. Lupu
- Computer Science · ArXiv
- 11 September 2019
This paper introduces Adaptive Federated Averaging, a novel algorithm for robust federated learning designed to detect failures, attacks, and bad updates provided by participants in a collaborative model, and proposes a Hidden Markov Model to model and learn the quality of the model updates provided by each participant during training.
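A compact stand-in for the aggregation idea, where a simple exponential update of per-client reliability replaces the paper's Hidden Markov Model; the client behaviour, thresholds, and learning rates are illustrative:

```python
# Robust federated averaging with per-client reliability estimates.
import numpy as np

rng = np.random.default_rng(4)
n_clients, dim, rounds = 10, 50, 30
true_grad = rng.normal(size=dim)
reliability = np.full(n_clients, 0.5)   # prior belief that a client is good
w = np.zeros(dim)

for t in range(rounds):
    updates = np.stack([
        true_grad + rng.normal(scale=0.3, size=dim) if c < 8
        else rng.normal(scale=3.0, size=dim)        # clients 8-9 are Byzantine
        for c in range(n_clients)
    ])
    ref = np.median(updates, axis=0)                # robust reference direction
    sim = updates @ ref / (np.linalg.norm(updates, axis=1)
                           * np.linalg.norm(ref) + 1e-12)
    good = (sim > 0.5).astype(float)                # per-round "good update" signal
    reliability = 0.9 * reliability + 0.1 * good    # stand-in for HMM filtering
    weights = reliability * (reliability > 0.3)     # block unreliable clients
    w += 0.1 * (weights / weights.sum()) @ updates  # weighted model update

print("reliability estimates:", np.round(reliability, 2))
```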
Exact Inference Techniques for the Analysis of Bayesian Attack Graphs
- Luis Muñoz-González, D. Sgandurra, Martín Barrère, Emil C. Lupu
- Computer Science · IEEE Transactions on Dependable and Secure…
- 8 October 2015
An extensive experimental evaluation on synthetic Bayesian attack graphs with different topologies is performed, showing the computational advantages of the proposed techniques in terms of time and memory use when compared to existing approaches.
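For a sense of the quantity being computed, the sketch below runs exact inference on a three-node attack graph by brute-force enumeration; the paper's junction tree techniques compute the same posteriors tractably on much larger graphs. All probabilities here are illustrative:

```python
# Exact inference on a tiny Bayesian attack graph by enumeration.
import itertools

# Attack chain: gain_foothold -> escalate -> exfiltrate (each node binary).
p_foothold = 0.3                       # P(foothold compromised)
p_escalate_given = {0: 0.01, 1: 0.6}   # P(escalate | foothold)
p_exfil_given = {0: 0.01, 1: 0.7}      # P(exfiltrate | escalate)

def joint(f, e, x):
    """Joint probability of one full assignment of the three nodes."""
    pf = p_foothold if f else 1 - p_foothold
    pe = p_escalate_given[f] if e else 1 - p_escalate_given[f]
    px = p_exfil_given[e] if x else 1 - p_exfil_given[e]
    return pf * pe * px

# Unconditional probability that the attacker exfiltrates data.
p_x = sum(joint(f, e, 1) for f, e in itertools.product((0, 1), repeat=2))

# Posterior after observing exfiltration (e.g. from an IDS alert):
# P(foothold | exfiltrate) via Bayes' rule over the enumerated joint.
p_f_given_x = sum(joint(1, e, 1) for e in (0, 1)) / p_x
print(f"P(exfiltrate) = {p_x:.4f},  P(foothold | exfiltrate) = {p_f_given_x:.4f}")
```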
Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks
- Kenneth T. Co, Luis Muñoz-González, Emil C. Lupu
- Computer Science · Conference on Computer and Communications…
- 30 September 2018
This paper introduces a structured approach for generating Universal Adversarial Perturbations (UAPs) with procedural noise functions, and unveils the systemic vulnerability of popular DCN models like Inception v3 and YOLO v3, with single noise patterns able to fool a model on up to 90% of the dataset.
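A rough sketch of an image-agnostic procedural-noise perturbation, with bilinearly interpolated value noise plus a sine colour map standing in here for the paper's Perlin/Gabor noise; the handful of parameters (grid period, sine frequency, budget eps) is what makes black-box search over patterns cheap:

```python
# Procedural-noise universal perturbation under an L-infinity budget.
import numpy as np

def value_noise(size=224, grid=8, seed=0):
    """Smooth noise: random values on a coarse grid, bilinearly upsampled."""
    rng = np.random.default_rng(seed)
    coarse = rng.uniform(-1, 1, (grid + 1, grid + 1))
    xs = np.linspace(0, grid, size, endpoint=False)
    i = xs.astype(int)
    t = xs - i
    c00 = coarse[np.ix_(i, i)]
    c10 = coarse[np.ix_(i + 1, i)]
    c01 = coarse[np.ix_(i, i + 1)]
    c11 = coarse[np.ix_(i + 1, i + 1)]
    ty, tx = t[:, None], t[None, :]
    return (c00 * (1 - ty) * (1 - tx) + c10 * ty * (1 - tx)
            + c01 * (1 - ty) * tx + c11 * ty * tx)

eps = 8 / 255                                   # L-infinity perturbation budget
noise = np.sin(value_noise() * 6 * np.pi)       # banded, low-frequency pattern
uap = eps * np.sign(noise)                      # max-magnitude perturbation
uap = np.repeat(uap[..., None], 3, axis=-1)     # broadcast across RGB channels

image = np.random.rand(224, 224, 3)             # placeholder for a real input
adv = np.clip(image + uap, 0.0, 1.0)            # valid-pixel-range projection
print("perturbation L-inf norm:", np.abs(adv - image).max())
```

Because the same pattern is applied to every input, a black-box attacker only needs to tune a few noise parameters against model queries rather than optimise per-image perturbations.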
Poisoning Attacks with Generative Adversarial Nets
- Luis Muñoz-González, Bjarne Pfitzner, Matteo Russo, Javier Carnerero-Cano, Emil C. Lupu
- Computer Science · ArXiv
- 18 June 2019
A novel generative model is introduced to craft systematic poisoning attacks against machine learning classifiers, generating adversarial training examples, i.e. samples that look like genuine data points but degrade the classifier's accuracy when used for training.
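A heavily simplified three-player sketch of the idea (the paper's exact objective and the way the target classifier enters the generator's loss differ): a generator tries to look genuine to a discriminator, keeping the poisoning points hard to detect, while pushing them into regions where their fixed label damages a classifier trained on them. Architectures, the trade-off weight alpha, and the label scheme are assumptions:

```python
# Three-player sketch: generator vs. discriminator vs. target classifier.
import torch
import torch.nn as nn

torch.manual_seed(0)
dim, half = 2, 64
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, dim))
D = nn.Sequential(nn.Linear(dim, 32), nn.ReLU(), nn.Linear(32, 1))
C = nn.Sequential(nn.Linear(dim, 1))          # target (linear) classifier
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
opt_c = torch.optim.Adam(C.parameters(), lr=1e-2)

# Genuine two-blob data; generated poisoning points will carry label 0.
x_real = torch.cat([torch.randn(half, dim) - 2, torch.randn(half, dim) + 2])
y_real = torch.cat([torch.zeros(half), torch.ones(half)])

alpha = 0.7  # stealthiness (fool D) vs. damage (mislead C) trade-off
for step in range(500):
    x_fake = G(torch.randn(32, 8))
    # Discriminator: separate genuine class-0 points from generated ones.
    d_loss = bce(D(x_real[y_real == 0]).squeeze(1), torch.ones(half)) \
           + bce(D(x_fake.detach()).squeeze(1), torch.zeros(32))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Classifier: trained on genuine data plus the (label-0) poisoning points.
    x_tr = torch.cat([x_real, x_fake.detach()])
    y_tr = torch.cat([y_real, torch.zeros(32)])
    c_loss = bce(C(x_tr).squeeze(1), y_tr)
    opt_c.zero_grad(); c_loss.backward(); opt_c.step()
    # Generator: look genuine to D while placing the label-0 points where C
    # currently predicts class 1 (maximising C's loss on them).
    g_loss = alpha * bce(D(x_fake).squeeze(1), torch.ones(32)) \
           - (1 - alpha) * bce(C(x_fake).squeeze(1), torch.zeros(32))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

with torch.no_grad():
    acc = ((C(x_real).squeeze(1) > 0).float() == y_real).float().mean()
print(f"target classifier accuracy after poisoning: {acc.item():.2f}")
```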
Universal Adversarial Perturbations for Malware
- Raphael Labaca-Castro, Luis Muñoz-González, Feargus Pendlebury, Gabi Dreo Rodosek, Fabio Pierazzi, L. Cavallaro
- Computer Science · ArXiv
- 2021
While adversarial training in the feature space must deal with large and often unconstrained regions, UAPs in the problem space identify specific vulnerabilities that allow us to harden a classifier more effectively, shifting the challenges and associated cost of identifying new universal adversarial transformations back to the attacker.
Real-time Detection of Practical Universal Adversarial Perturbations
- Kenneth T. Co, Luis Muñoz-González, Leslie Kanthan, Emil C. Lupu
- Computer Science · ArXiv
- 16 May 2021
HyperNeuron is able to simultaneously detect both adversarial mask and patch UAPs with comparable or better performance than existing UAP defenses, whilst introducing a significantly reduced latency of only 0.86 milliseconds per image. This suggests that many realistic and practical universal attacks can be reliably mitigated in real time, which shows promise for the robust deployment of machine learning systems.
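A toy version of the underlying intuition, assuming a frozen random layer and z-score thresholds chosen purely for illustration: a universal perturbation tends to over-excite the same hidden units on every input, so cheap per-neuron statistics calibrated on clean data can flag it at inference time:

```python
# Activation-monitoring detector for universal perturbations (illustrative).
import numpy as np

rng = np.random.default_rng(5)
d_in, d_hidden = 100, 64
W = rng.normal(scale=d_in ** -0.5, size=(d_in, d_hidden))  # frozen toy layer

def hidden(x):
    return np.maximum(x @ W, 0.0)          # ReLU activations to monitor

# Calibration: per-neuron mean and std of activations on clean inputs.
clean = rng.normal(size=(1000, d_in))
acts = hidden(clean)
mu, sigma = acts.mean(axis=0), acts.std(axis=0) + 1e-8

def suspicious(x, z_thresh=3.0, frac_thresh=0.05):
    """Flag x if >5% of neurons sit more than 3 sigma above their clean mean."""
    z = (hidden(x) - mu) / sigma
    return (z > z_thresh).mean() > frac_thresh

uap = 2.0 * np.sign(rng.normal(size=d_in))  # stand-in universal perturbation
benign = rng.normal(size=d_in)
print("benign flagged:", suspicious(benign))
print("benign + UAP flagged:", suspicious(benign + uap))
```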
...