@inproceedings{Lechner2021AdversarialTI,
  title={Adversarial Training is Not Ready for Robot Learning},
  author={Mathias Lechner and Ramin M. Hasani and Radu Grosu and Daniela Rus and Thomas A. Henzinger},
  booktitle={2021 IEEE International Conference on Robotics and Automation (ICRA)},
  year={2021},
  pages={4140--4147}
}
Published 15 March 2021
Adversarial training is an effective method for training deep learning models that are resilient to norm-bounded perturbations, at the cost of a drop in nominal performance. While adversarial training appears to enhance the robustness and safety of a deep model deployed in open-world decision-critical applications, counterintuitively, it induces undesired behaviors in robot learning settings. In this paper, we show theoretically and experimentally that neural controllers obtained via adversarial…
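To make the norm-bounded setting concrete, here is a minimal NumPy sketch (illustrative only, not code from the paper) of one adversarial training step for a binary logistic-regression model, using a single-step FGSM perturbation inside an L-infinity ball of radius `eps`; all function and variable names are invented for illustration.

```python
import numpy as np

def fgsm_adversarial_step(w, b, x, y, eps, lr=0.1):
    """One adversarial training step for binary logistic regression.

    Crafts a worst-case (linearized) L-infinity perturbation of the
    input via FGSM, then takes a gradient step on the perturbed example.
    """
    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Gradient of the logistic loss w.r.t. the input x is (p - y) * w
    p = sigmoid(w @ x + b)
    x_adv = x + eps * np.sign((p - y) * w)  # FGSM perturbation

    # Standard gradient step, but on the adversarial example
    p_adv = sigmoid(w @ x_adv + b)
    w_new = w - lr * (p_adv - y) * x_adv
    b_new = b - lr * (p_adv - y)
    return w_new, b_new, x_adv
```

By construction, `x_adv` stays within `eps` of `x` in every coordinate; training on such perturbed examples is what trades nominal performance for robustness.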
## Citations

15 Citations

### Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning

ArXiv, 2022
This work revisits the robustness-accuracy trade-off in robot learning by systematically analyzing whether recent advances in robust training methods and theory, in conjunction with adversarial robot learning, can make adversarial training suitable for real-world robot applications.

### Adversarially Regularized Policy Learning Guided by Trajectory Optimization

L4DC, 2022
The proposed approach controls the smoothness (local Lipschitz continuity) of the neural control policies by stabilizing the output control with respect to the worst-case perturbation to the input state.
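As a rough illustration of the idea above (my sketch, not the paper's algorithm, which guides the inner problem with trajectory optimization), the worst-case output change of a control policy under bounded input perturbations can be estimated by sampling; all names below are invented.

```python
import numpy as np

def worst_case_output_change(policy, x, eps, n_samples=256, seed=0):
    """Estimate max_{|d|_inf <= eps} |policy(x + d) - policy(x)|_inf
    by random sampling. A true adversarial regularizer would solve
    this inner maximization; sampling is a cheap stand-in."""
    rng = np.random.default_rng(seed)
    u0 = policy(x)
    worst = 0.0
    for _ in range(n_samples):
        d = rng.uniform(-eps, eps, size=x.shape)
        worst = max(worst, float(np.max(np.abs(policy(x + d) - u0))))
    return worst
```

Adding this quantity (or a certified bound on it) to the training loss penalizes large local Lipschitz constants of the policy, stabilizing the output control against input perturbations.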

### Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training

NeurIPS, 2021
It is shown that minimizing adversarial risk on the perturbed data is equivalent to optimizing an upper bound of natural risk on the original data, which implies that adversarial training can serve as a principled defense against delusive attacks.

### Latent Imagination Facilitates Zero-Shot Transfer in Autonomous Racing

2022 International Conference on Robotics and Automation (ICRA)
This paper investigates how model-based agents capable of learning in imagination substantially outperform model-free agents with respect to performance, sample efficiency, successful task completion, and generalization in real-world autonomous vehicle control tasks, where advanced model-free deep RL algorithms fail.

### Rethink the Adversarial Scenario-based Safety Testing of Robots: the Comparability and Optimal Aggressiveness

ArXiv, 2022
This paper disputes the above intuition by introducing an impossibility theorem that provably shows that all safety testing algorithms of the aforementioned kind perform equally well, with the same expected sampling efficiency.

### Interactive Analysis of CNN Robustness

Comput. Graph. Forum, 2021
Perturber is a web-based application that lets users instantaneously explore how CNN activations and predictions evolve when a 3D input scene is interactively perturbed; the authors replicate users' insights with other CNN architectures and input images, yielding new insights about the vulnerability of adversarially trained models.

### BarrierNet: A Safety-Guaranteed Layer for Neural Networks

ArXiv, 2021
These novel safety layers, termed a BarrierNet, can be used in conjunction with any neural network-based controller and can be trained by gradient descent, allowing the safety constraints of a neural controller to adapt to changing environments.

### Causal Navigation by Continuous-time Neural Networks

NeurIPS, 2021
The results demonstrate that causal continuous-time deep models can perform robust navigation tasks, where advanced recurrent models fail, and learn complex causal control representations directly from raw visual inputs and scale to solve a variety of tasks using imitation learning.

### Beyond Robustness: A Taxonomy of Approaches towards Resilient Multi-Robot Systems

ArXiv, 2021
This survey article analyzes how resilience is achieved in networks of agents and multirobot systems that are able to overcome adversity by leveraging system-wide complementarity, diversity, and redundancy, often involving a reconfiguration of robotic capabilities to provide some key ability that was not present in the system a priori.

### Sparse Flows: Pruning Continuous-depth Models

NeurIPS, 2021
This work designs a framework to decipher the internal dynamics of these continuous-depth models by pruning their network architectures; empirical results suggest that pruning improves generalization for neural ODEs in generative modeling.

## References

SHOWING 1-10 OF 66 REFERENCES

### Training Adversarial Agents to Exploit Weaknesses in Deep Control Policies

2020 IEEE International Conference on Robotics and Automation (ICRA)
An automated black box testing framework based on adversarial reinforcement learning is proposed, which is able to find weaknesses in both control policies that were not evident during online testing and therefore, demonstrate a significant benefit over manual testing methods.

### Feature Purification: How Adversarial Training Performs Robust Deep Learning

2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS), 2022
A complexity lower bound is proved, showing that low complexity models such as linear classifiers, low-degree polynomials, or even the neural tangent kernel for this network, cannot defend against perturbations of this same radius, no matter what algorithms are used to train them.

### Risk Averse Robust Adversarial Reinforcement Learning

2019 International Conference on Robotics and Automation (ICRA)
It is shown through experiments that a risk-averse agent is better equipped to handle a risk-seeking adversary, and experiences substantially fewer crashes compared to agents trained without an adversary.

### Provable defenses against adversarial examples via the convex outer adversarial polytope

ICML, 2018
A method to learn deep ReLU-based classifiers that are provably robust against norm-bounded adversarial perturbations, and it is shown that the dual problem to this linear program can be represented itself as a deep network similar to the backpropagation network, leading to very efficient optimization approaches that produce guaranteed bounds on the robust loss.

### Adversarial Feature Training for Generalizable Robotic Visuomotor Control

2020 IEEE International Conference on Robotics and Automation (ICRA)
It is demonstrated that by using adversarial training for domain transfer, it is possible to train visuomotor policies based on RL frameworks, and then transfer the acquired policy to other novel task domains, and the method is evaluated on two real robotic tasks, picking and pouring, to demonstrate its superiority.

### Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers

NeurIPS, 2019
It is demonstrated through extensive experimentation that this method consistently outperforms all existing provably $\ell_2$-robust classifiers by a significant margin on ImageNet and CIFAR-10, establishing the state of the art for provable $\ell_2$-defenses.
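The smoothed classifier in this line of work predicts the majority class of the base classifier under Gaussian input noise. A minimal sketch of the prediction step (the certified-radius computation is omitted, and all names are illustrative):

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma, n=500, seed=0):
    """Majority vote of the base classifier under Gaussian input noise.

    Sketch of the smoothed classifier g(x) = argmax_c P[f(x + noise) = c],
    estimated with n Monte Carlo noise draws of std sigma.
    """
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(n):
        c = base_classifier(x + rng.normal(0.0, sigma, size=x.shape))
        votes[c] = votes.get(c, 0) + 1
    return max(votes, key=votes.get)
```

The paper's contribution is to adversarially train the base classifier so that this smoothed predictor is both accurate and certifiably robust.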

### Adversarial Machine Learning at Scale

ICLR, 2017
This research applies adversarial training to ImageNet, finds that single-step attacks are the best for mounting black-box attacks, and resolves a "label leaking" effect that causes adversarially trained models to perform better on adversarial examples than on clean examples.

### Towards Deep Learning Models Resistant to Adversarial Attacks

ICLR, 2018
This work studies the adversarial robustness of neural networks through the lens of robust optimization, and suggests the notion of security against a first-order adversary as a natural and broad security guarantee.
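In the robust-optimization view, training minimizes over model parameters the worst-case loss over norm-bounded perturbations; the inner maximization is typically approximated with projected gradient descent (PGD). A hedged NumPy sketch of that inner step for an arbitrary differentiable loss (toy example, invented names):

```python
import numpy as np

def pgd_attack(loss_grad, x0, eps, step, n_iter=20):
    """Approximate the inner maximization of robust optimization:
    gradient *ascent* on the loss, projected back onto the L-infinity
    ball of radius eps around x0 after every step."""
    x = x0.copy()
    for _ in range(n_iter):
        x = x + step * np.sign(loss_grad(x))  # ascend the loss
        x = np.clip(x, x0 - eps, x0 + eps)    # project onto the ball
    return x
```

On a toy quadratic loss the iterates climb the loss until they are clipped to the boundary of the eps-ball, which is where worst-case perturbations typically live.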

### Disentangling Adversarial Robustness and Generalization

2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
This work assumes an underlying, low-dimensional data manifold and shows that regular robustness and generalization are not necessarily contradicting goals, which implies that both robust and accurate models are possible.

### Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning

IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019
A new regularization method based on virtual adversarial loss, a new measure of local smoothness of the conditional label distribution given the input, achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.