Corpus ID: 26914589

# Whatever Does Not Kill Deep Reinforcement Learning, Makes It Stronger

@article{Behzadan2017WhateverDN,
title={Whatever Does Not Kill Deep Reinforcement Learning, Makes It Stronger},
author={Vahid Behzadan and Arslan Munir},
journal={ArXiv},
year={2017},
volume={abs/1712.09344}
}
• Published 2017
• Computer Science
• ArXiv
Recent developments have established the vulnerability of deep Reinforcement Learning (RL) to policy manipulation attacks via adversarial perturbations. In this paper, we investigate the robustness and resilience of deep RL to training-time and test-time attacks. Through experimental results, we demonstrate that under noncontiguous training-time attacks, Deep Q-Network (DQN) agents can recover and adapt to the adversarial conditions by reactively adjusting the policy. Our results also show that…
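The noncontiguous training-time attack setting the abstract describes can be illustrated with a toy observation-perturbation schedule. This is a hypothetical sketch: `attack_every` and the sign-noise perturbation are illustrative choices, not the paper's exact attack protocol.

```python
import numpy as np

def attacked_observation(obs, step, attack_every=4, eps=0.01, rng=None):
    """Perturb the agent's observation only on every `attack_every`-th step,
    leaving the remaining steps clean -- a noncontiguous attack schedule."""
    rng = rng or np.random.default_rng(0)
    if step % attack_every == 0:
        # Bounded perturbation: each feature moves by at most eps.
        return obs + eps * np.sign(rng.standard_normal(obs.shape))
    return obs
```

On the clean steps the observation passes through unchanged, which is what gives a DQN agent room to recover and readjust its policy between attack intervals.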

#### Citations of this paper

Robust Deep Reinforcement Learning against Adversarial Perturbations on Observations
• Computer Science
• ArXiv
• 2020
The proposed training procedure significantly improves the robustness of DQN and DDPG agents under a suite of strong white box attacks on observations, including a few novel attacks the authors specifically craft.
Real-time Attacks Against Deep Reinforcement Learning Policies
An effective detection technique is proposed which can form the basis for robust defenses against attacks based on universal perturbations; the attack itself is shown to be effective, as it fully degrades the performance of both deterministic and stochastic policies.
Online Robustness Training for Deep Reinforcement Learning
• Computer Science, Mathematics
• ArXiv
• 2019
This work shows that RS-DQN can be combined with state-of-the-art adversarial training and provably robust training to obtain an agent that is resilient to strong attacks during training and evaluation.
Analysis and Improvement of Adversarial Training in DQN Agents With Adversarially-Guided Exploration (AGE)
• Computer Science, Mathematics
• ArXiv
• 2019
This paper investigates the effectiveness of adversarial training in enhancing the robustness of Deep Q-Network policies to state-space perturbations, and proposes a novel Adversarially-Guided Exploration (AGE) mechanism based on a modified hybrid of the $\epsilon$-greedy algorithm and Boltzmann exploration.
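A minimal sketch of the kind of $\epsilon$-greedy/Boltzmann hybrid this entry describes (illustrative only; the paper's AGE mechanism additionally guides exploration adversarially, which is not reproduced here):

```python
import numpy as np

def hybrid_action(q_values, epsilon=0.1, temperature=1.0, rng=None):
    """Act greedily with probability 1 - epsilon; otherwise explore by
    sampling from a Boltzmann (softmax) distribution over Q-values."""
    rng = rng or np.random.default_rng()
    if rng.random() < epsilon:
        # Numerically stabilized softmax over Q-values.
        logits = (np.asarray(q_values) - np.max(q_values)) / temperature
        probs = np.exp(logits) / np.sum(np.exp(logits))
        return int(rng.choice(len(q_values), p=probs))
    return int(np.argmax(q_values))
```

Unlike plain $\epsilon$-greedy, the exploratory branch still prefers higher-valued actions, with `temperature` controlling how sharply.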
Learning to Cope with Adversarial Attacks
• Computer Science, Mathematics
• ArXiv
• 2019
The results show that the MLAH agent exhibits interesting coping behaviors when subjected to different adversarial attacks in order to maintain a nominal reward, and that the framework exhibits a hierarchical coping capability based on the adaptability of the Master policy and the sub-policies themselves.
Mitigation of Policy Manipulation Attacks on Deep Q-Networks with Parameter-Space Noise
• Computer Science, Mathematics
• SAFECOMP Workshops
• 2018
This work experimentally verifies the effect of parameter-space noise in reducing the transferability of adversarial examples, and demonstrates the promising performance of this technique in mitigating the impact of whitebox and blackbox attacks at both test and training times.
Robust Reinforcement Learning on State Observations with Learned Optimal Adversary
• Huan Zhang, Cho-Jui Hsieh
• Computer Science, Mathematics
• ICLR
• 2021
A framework of alternating training with learned adversaries (ATLA) is proposed, which trains an adversary online together with the agent using policy gradient, following the optimal adversarial attack framework; it is demonstrated that an optimal adversary to perturb state observations can be found.
Adversarial Attacks on Deep Algorithmic Trading Policies
• Computer Science, Economics
• ArXiv
• 2020
A threat model for deep trading policies is developed, and two attack techniques for manipulating the performance of such policies at test-time are proposed, demonstrating the effectiveness of the proposed attacks against benchmark and real-world DQN trading agents.
RL-VAEGAN: Adversarial defense for reinforcement learning agents via style transfer
• Computer Science
• Knowl. Based Syst.
• 2021
This paper investigates the adversarial robustness of RL agents and proposes a novel defense framework for RL based on the idea of style transfer, called RL-VAEGAN, which eliminates the threat of adversarial attacks on RL agents by transferring adversarial states to unperturbed legitimate ones under the shared-content latent space assumption.

#### References

Showing 1-10 of 31 references
Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks
• Computer Science, Mathematics
• MLDM
• 2017
This work establishes that reinforcement learning techniques based on Deep Q-Networks are also vulnerable to adversarial input perturbations, and presents a novel class of attacks based on this vulnerability that enable policy manipulation and induction in the learning process of DQNs.
Adversarially Robust Policy Learning: Active construction of physically-plausible perturbations
• Computer Science
• 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
• 2017
This work introduces Adversarially Robust Policy Learning (ARPL), an algorithm that leverages active computation of physically-plausible adversarial examples during training to enable robust policy learning in the source domain and robust performance under both random and adversarial input perturbations.
Delving into adversarial attacks on deep policies
• Computer Science, Mathematics
• ICLR
• 2017
This paper presents a novel method, based on the value function, for reducing the number of times adversarial examples need to be injected for a successful attack, and explores how re-training on random noise and FGSM perturbations affects the resilience against adversarial examples.
Adversarial Attacks on Neural Network Policies
• Computer Science, Mathematics
• ICLR
• 2017
This work shows existing adversarial example crafting techniques can be used to significantly degrade test-time performance of trained policies, even with small adversarial perturbations that do not interfere with human perception.
Detecting Adversarial Attacks on Neural Network Policies with Visual Foresight
• Computer Science
• ArXiv
• 2017
This paper proposes a defense mechanism to defend reinforcement learning agents from adversarial attacks by leveraging an action-conditioned frame prediction module, and demonstrates that the proposed defense mechanism achieves favorable performance against baseline algorithms in detecting adversarial examples and in earning rewards when the agents are under attack.
Practical Black-Box Attacks against Deep Learning Systems using Adversarial Examples
• Computer Science
• ArXiv
• 2016
This work introduces the first practical demonstration that the cross-model transfer phenomenon enables attackers to control a remotely hosted DNN with no access to the model, its parameters, or its training data, and introduces the attack strategy of fitting a substitute model to input-output pairs collected in this manner, then crafting adversarial examples based on this auxiliary model.
Explaining and Harnessing Adversarial Examples
• Computer Science, Mathematics
• ICLR
• 2015
It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.
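The fast gradient sign method (FGSM) that this paper introduces, and the linearity argument behind it, can be sketched in a few lines. The linear scorer `w` below is a toy example, not from the paper:

```python
import numpy as np

def fgsm(x, grad_x, eps):
    """FGSM: a single step of size eps in the sign direction of the
    loss gradient with respect to the input."""
    return x + eps * np.sign(grad_x)

# For a linear score f(x) = w @ x, the input gradient is exactly w, so an
# eps-bounded FGSM step shifts the score by eps * ||w||_1 -- large in high
# dimensions, which is the "linear nature" explanation the paper argues for.
w = np.array([1.0, -2.0, 3.0])
x_adv = fgsm(np.zeros(3), w, eps=0.1)
shift = float(w @ x_adv)   # 0.1 * (1 + 2 + 3) = 0.6
```

Even though each input coordinate moves by only 0.1, the score moves by 0.6; with thousands of input dimensions the same per-coordinate budget produces a far larger shift.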
The Limitations of Deep Learning in Adversarial Settings
• Computer Science, Mathematics
• 2016 IEEE European Symposium on Security and Privacy (EuroS&P)
• 2016
This work formalizes the space of adversaries against deep neural networks (DNNs) and introduces a novel class of algorithms to craft adversarial samples based on a precise understanding of the mapping between inputs and outputs of DNNs.
Parameter Space Noise for Exploration
This work demonstrates that RL with parameter noise learns more efficiently than traditional RL with action space noise and evolutionary strategies individually, through experimental comparison of DQN, DDPG, and TRPO on high-dimensional discrete action environments as well as continuous control tasks.
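Parameter-space noise can be sketched as perturbing a copy of the network weights once per episode, rather than perturbing actions at every step. This is a simplified sketch; the paper also adapts the noise scale over time, which is omitted here, and `perturb_params` is a hypothetical helper name:

```python
import numpy as np

def perturb_params(params, sigma=0.05, rng=None):
    """Return a noisy copy of the parameters; the agent acts with this copy
    for a whole episode, giving temporally consistent exploration."""
    rng = rng or np.random.default_rng(0)
    return {name: p + sigma * rng.standard_normal(p.shape)
            for name, p in params.items()}

weights = {"W1": np.ones((4, 2)), "b1": np.zeros(2)}
noisy = perturb_params(weights, sigma=0.1)
```

Because the same perturbed weights are used for the whole episode, the resulting behavior is consistent across steps, unlike independent per-step action noise.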
Practical Black-Box Attacks against Machine Learning
• Computer Science
• AsiaCCS
• 2017
This work introduces the first practical demonstration of an attacker controlling a remotely hosted DNN with no such knowledge, and finds that this black-box attack strategy is capable of evading defense strategies previously found to make adversarial example crafting harder.