Is Deep Learning Safe for Robot Vision? Adversarial Examples Against the iCub Humanoid
@article{Melis2017IsDL,
  title   = {Is Deep Learning Safe for Robot Vision? Adversarial Examples Against the iCub Humanoid},
  author  = {Marco Melis and Ambra Demontis and Battista Biggio and Gavin Brown and Giorgio Fumera and Fabio Roli},
  journal = {2017 IEEE International Conference on Computer Vision Workshops (ICCVW)},
  year    = {2017},
  pages   = {751-759}
}
Deep neural networks have been widely adopted in recent years, exhibiting impressive performances in several application domains. It has however been shown that they can be fooled by adversarial examples, i.e., images altered by a barely-perceivable adversarial noise, carefully crafted to mislead classification. In this work, we aim to evaluate the extent to which robot-vision systems embodying deep-learning algorithms are vulnerable to adversarial examples, and propose a computationally…
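As a rough illustration of the threat described in the abstract, the sketch below crafts a small perturbation for a differentiable classifier by repeatedly stepping along the sign of the loss gradient under an L-infinity budget. This is a generic gradient-based attack, not the exact optimization used in the paper; the toy linear model, input size, and budget are placeholder assumptions.

```python
# Illustrative sketch only: a generic iterative gradient attack under an
# L-infinity budget. The toy classifier and random "image" stand in for the
# deep network and iCub camera frames targeted in the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # placeholder classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 3, 32, 32)               # clean input with pixels in [0, 1]
y = torch.tensor([3])                      # its true label
eps, alpha, steps = 8 / 255, 2 / 255, 10   # budget, step size, iterations (assumed values)

x_adv = x.clone()
for _ in range(steps):
    x_adv.requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    with torch.no_grad():
        x_adv = x_adv + alpha * grad.sign()                    # ascend the loss
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)  # stay within the L-inf budget
        x_adv = x_adv.clamp(0, 1)                              # stay a valid image

print("clean prediction:      ", model(x).argmax(1).item())
print("adversarial prediction:", model(x_adv).argmax(1).item())
```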
84 Citations
Feature-Guided Black-Box Safety Testing of Deep Neural Networks
- 2018
Computer Science
TACAS
A feature-guided black-box approach to test the safety of deep neural networks that requires no knowledge of the network at hand and can be used to evaluate the robustness of neural networks in safety-critical applications such as traffic sign recognition in self-driving cars.
Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey
- 2018
Computer Science
IEEE Access
This paper presents the first comprehensive survey on adversarial attacks on deep learning in computer vision, reviewing the works that design adversarial attacks, analyze the existence of such attacks, and propose defenses against them.
Deep neural rejection against adversarial examples
- 2020
Computer Science
EURASIP J. Inf. Secur.
This work proposes a deep neural rejection mechanism to detect adversarial examples, based on the idea of rejecting samples that exhibit anomalous feature representations at different network layers, and empirically shows that this approach outperforms previously proposed methods that detect adversarial examples by only analyzing the feature representation provided by the output network layer.
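A minimal sketch of a reject option, assuming only a confidence threshold on the network's softmax output; the paper's deep neural rejection instead fits detectors on the representations of several network layers, so this is a simplified stand-in.

```python
# Simplified stand-in for a rejection-based defense: flag inputs whose top
# softmax score falls below a threshold instead of forcing a class decision.
import torch
import torch.nn.functional as F

def predict_with_reject(model, x, threshold=0.5):
    """Return predicted labels, with -1 for samples rejected as anomalous."""
    with torch.no_grad():
        probs = F.softmax(model(x), dim=1)
    conf, labels = probs.max(dim=1)
    labels[conf < threshold] = -1   # reject low-confidence (potentially adversarial) inputs
    return labels
```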
Model-Based Robust Deep Learning
- 2020
Computer Science
ArXiv
The objective is to provide general training algorithms that can be used to train deep neural networks to be robust against natural variation in data, and to exploit such models in three novel model-based robust training algorithms in order to enhance the robustness of deep learning with respect to the given model.
Robustness to adversarial examples can be improved with overfitting
- 2020
Computer Science
Int. J. Mach. Learn. Cybern.
It is argued that the error in adversarial examples is caused by high bias, i.e. by regularization that has local negative effects, which ties the phenomenon to the trade-off that exists in machine learning between fitting and generalization.
Model-Based Robust Deep Learning: Generalizing to Natural, Out-of-Distribution Data.
- 2020
Computer Science
The objective is to provide general training algorithms that can be used to train deep neural networks to be robust against natural variation in data, and to exploit such models in three novel model-based robust training algorithms in order to enhance the robustness of deep learning with respect to the given model.
DDSA: A Defense Against Adversarial Attacks Using Deep Denoising Sparse Autoencoder
- 2019
Computer Science
IEEE Access
This paper proposes a novel defense solution based on a Deep Denoising Sparse Autoencoder (DDSA), which provides a high robustness against a set of prominent attacks under white-, gray- and black-box settings, and outperforms state-of-the-art defense methods.
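Below is a minimal sketch of a denoising autoencoder with an L1 sparsity penalty, used as an input-purification step before classification. The architecture, noise model, and penalty weight are illustrative assumptions, not the DDSA configuration from the paper.

```python
# Illustrative denoising autoencoder with a sparsity penalty on the hidden code.
import torch
import torch.nn as nn

class DenoisingSparseAE(nn.Module):
    def __init__(self, dim=784, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())

    def forward(self, x):
        h = self.enc(x)
        return self.dec(h), h

def training_step(model, x, optimizer, noise_std=0.1, sparsity_weight=1e-4):
    """One training step: corrupt the input, reconstruct the clean version."""
    x_noisy = (x + noise_std * torch.randn_like(x)).clamp(0, 1)
    recon, h = model(x_noisy)
    loss = nn.functional.mse_loss(recon, x) + sparsity_weight * h.abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```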
Adversarial example detection for DNN models: a review and experimental comparison
- 2022
Computer Science
Artificial Intelligence Review
This paper focuses on the image classification task and provides a survey of methods for detecting test-time evasion attacks (adversarial examples) against neural network classifiers.
Model-Based Robust Deep Learning: Generalizing to Natural, Out-of-Distribution Data
- 2020
Computer Science
This work proposes a paradigm shift from perturbation-based adversarial robustness to model-based robust deep learning, and develops three novel model-based robust training algorithms that improve the robustness of DL with respect to natural variation.
32 References
Detecting Adversarial Samples from Artifacts
- 2017
Computer Science
ArXiv
This paper investigates model confidence on adversarial samples by looking at Bayesian uncertainty estimates, available in dropout neural networks, and by performing density estimation in the subspace of deep features learned by the model; the results yield a method for implicit adversarial detection that is oblivious to the attack algorithm.
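The dropout-uncertainty half of the detector can be sketched as follows, assuming a PyTorch model containing dropout layers: keep dropout active at test time and score each input by the variance of repeated stochastic predictions. The kernel density estimate on the final feature layer used in the paper is omitted here.

```python
# Monte-Carlo dropout uncertainty score (higher variance -> more suspicious input).
import torch

def mc_dropout_uncertainty(model, x, n_samples=30):
    model.train()  # keep dropout layers stochastic at inference time
    with torch.no_grad():
        preds = torch.stack([torch.softmax(model(x), dim=1) for _ in range(n_samples)])
    return preds.var(dim=0).sum(dim=1)  # one uncertainty score per input
```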
Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics
- 2017
Computer Science
2017 IEEE International Conference on Computer Vision (ICCV)
After detecting adversarial examples, it is shown that many of them can be recovered by simply applying a small average filter to the image, which should lead to more insights about the classification mechanisms in deep convolutional neural networks.
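The recovery step mentioned above amounts to smoothing the image with a small box filter before re-classifying; a minimal sketch, with the 3x3 kernel size as an illustrative choice:

```python
# Per-channel box (average) filter applied to a batch of images (N, C, H, W).
import torch
import torch.nn.functional as F

def average_filter(images, k=3):
    c = images.shape[1]
    kernel = torch.ones(c, 1, k, k, device=images.device) / (k * k)  # uniform kernel
    return F.conv2d(images, kernel, padding=k // 2, groups=c)        # same spatial size
```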
The Limitations of Deep Learning in Adversarial Settings
- 2016
Computer Science
2016 IEEE European Symposium on Security and Privacy (EuroS&P)
This work formalizes the space of adversaries against deep neural networks (DNNs) and introduces a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.
A Boundary Tilting Persepective on the Phenomenon of Adversarial Examples
- 2016
Computer Science
ArXiv
It is shown that the adversarial strength observed in practice is directly dependent on the level of regularisation used and that the strongest adversarial examples, symptomatic of overfitting, can be avoided by using a proper level of regularisation.
Explaining and Harnessing Adversarial Examples
- 2015
Computer Science
ICLR
It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.
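For reference, the one-step attack introduced in this paper, the fast gradient sign method, perturbs the input along the sign of the gradient of the training loss J(θ, x, y) with budget ε:

```latex
% Fast gradient sign method (FGSM)
x_{\mathrm{adv}} = x + \epsilon \cdot \operatorname{sign}\!\big(\nabla_x J(\theta, x, y)\big)
```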
Foveation-based Mechanisms Alleviate Adversarial Examples
- 2015
Computer Science
ArXiv
It is shown that adversarial examples, i.e., the visually imperceptible perturbations that cause Convolutional Neural Networks (CNNs) to fail, can be alleviated with a foveation mechanism that applies the CNN to different image regions, and it is corroborated that when the neural responses are linear, applying the foveation mechanism to the adversarial example tends to significantly reduce the effect of the perturbation.
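A rough sketch of a foveation-style evaluation, assuming a standard PyTorch classifier: classify several centered crops of the image (each resized back to the network's input resolution) and average the predictions. The paper's actual crop-selection strategy is more involved, so the crop fractions here are placeholders.

```python
# Classify centered crops of one image and average the resulting probabilities.
import torch
import torch.nn.functional as F

def foveated_predict(model, image, crop_fracs=(1.0, 0.8, 0.6)):
    """image: (1, C, H, W) tensor; returns class probabilities averaged over crops."""
    _, _, H, W = image.shape
    probs = []
    with torch.no_grad():
        for f in crop_fracs:
            h, w = int(H * f), int(W * f)
            top, left = (H - h) // 2, (W - w) // 2
            crop = image[:, :, top:top + h, left:left + w]
            crop = F.interpolate(crop, size=(H, W), mode="bilinear", align_corners=False)
            probs.append(torch.softmax(model(crop), dim=1))
    return torch.stack(probs).mean(dim=0)
```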
Teaching iCub to recognize objects using deep Convolutional Neural Networks
- 2015
Computer Science
MLIS@ICML
This work investigates how the latest results on deep learning can advance the visual recognition capabilities of a robotic platform (the iCub humanoid robot) in a real-world scenario and benchmarks the resulting system on a new dataset of images depicting 28 objects, named iCubWorld28, which the authors plan to release.
DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks
- 2016
Computer Science
2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
The DeepFool algorithm is proposed to efficiently compute perturbations that fool deep networks, and thus reliably quantify the robustness of these classifiers; it outperforms recent methods in the task of computing adversarial perturbations and making classifiers more robust.
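For the simplest case covered by the paper, an affine binary classifier f(x) = wᵀx + b, the minimal perturbation that reaches the decision boundary is the orthogonal projection onto it; the full DeepFool algorithm iteratively applies this linearization to a multi-class, nonlinear network until the predicted label changes:

```latex
% Minimal perturbation for an affine binary classifier
r_{*}(x) = -\frac{f(x)}{\lVert w \rVert_2^2}\, w, \qquad f(x) = w^\top x + b
```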
Adversarial examples in the physical world
- 2017
Computer Science
ICLR
It is found that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera, which shows that machine learning systems are vulnerable to adversarial examples even in physical-world scenarios.
Intriguing properties of neural networks
- 2014
Computer Science
ICLR
It is found that there is no distinction between individual high-level units and random linear combinations of high-level units, according to various methods of unit analysis, and it is suggested that it is the space, rather than the individual units, that contains the semantic information in the high layers of neural networks.