Tactics of Adversarial Attack on Deep Reinforcement Learning Agents

@inproceedings{Lin2017TacticsOA,
  title={Tactics of Adversarial Attack on Deep Reinforcement Learning Agents},
  author={Yen-Chen Lin and Zhang-Wei Hong and Yuan-Hong Liao and Meng-Li Shih and Ming-Yu Liu and Min Sun},
  booktitle={IJCAI},
  year={2017}
}
Abstract

We introduce two tactics to attack agents trained by deep reinforcement learning algorithms using adversarial examples: the strategically-timed attack and the enchanting attack. In the strategically-timed attack, the adversary aims to minimize the agent's reward by attacking the agent at only a small subset of time steps in an episode. Limiting the attack activity to this subset helps prevent detection of the attack by the agent. We propose a novel method to determine when an…
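The strategically-timed attack hinges on choosing *when* to perturb rather than attacking every frame. A minimal sketch of one plausible timing criterion, assuming the agent exposes softmax action preferences: attack only when the policy strongly prefers one action over the others, since that is when a perturbation is most likely to change behavior. The function names and the 0.8 threshold here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over action logits."""
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def preference_gap(policy_logits):
    """Gap between the most and least preferred actions under the
    policy's softmax distribution; large gap = confident agent."""
    p = softmax(policy_logits)
    return p.max() - p.min()

def should_attack(policy_logits, threshold=0.8):
    """Trigger the adversarial perturbation only at time steps where
    the preference gap exceeds the threshold, keeping the total number
    of attacked steps in an episode small."""
    return bool(preference_gap(policy_logits) > threshold)
```

With a confident policy (logits `[10, 0, 0]`) the gap is near 1 and the step is attacked; with near-uniform logits it is not, so most steps in an episode pass untouched.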

    Citations

    Publications citing this paper.
    SHOWING 1-10 OF 115 CITATIONS

    Adversarial Policies: Attacking Deep Reinforcement Learning

    CITES BACKGROUND

    Stealthy and Efficient Adversarial Attacks against Deep Reinforcement Learning

    CITES BACKGROUND, METHODS & RESULTS
    HIGHLY INFLUENCED

    Blackbox Attacks on Reinforcement Learning Agents Using Approximated Temporal Information

    • Yiren Zhao, Ilia Shumailov, +3 authors Ross Anderson
    • Computer Science, Mathematics
    • 2020 50th Annual IEEE/IFIP International Conference on Dependable Systems and Networks Workshops (DSN-W)
    • 2020
    CITES BACKGROUND
    HIGHLY INFLUENCED

    Optimal Attacks on Reinforcement Learning Policies

    CITES METHODS

    Snooping Attacks on Deep Reinforcement Learning

    CITES RESULTS & BACKGROUND
    HIGHLY INFLUENCED

    Enhanced Adversarial Strategically-Timed Attacks Against Deep Reinforcement Learning

    • Chao-Han Huck Yang, Jun Qi, +4 authors Xiaoli Ma
    • Computer Science, Engineering, Mathematics
    • ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
    • 2020
    CITES METHODS & BACKGROUND
    HIGHLY INFLUENCED


    CITATION STATISTICS

    • 9 Highly Influenced citations

    • Averaged 36 citations per year from 2018 through 2020

    References

    Publications referenced by this paper.
    SHOWING 1-10 OF 28 REFERENCES

    Adversarial Attacks on Neural Network Policies

    HIGHLY INFLUENTIAL

    Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks

    HIGHLY INFLUENTIAL

    The Limitations of Deep Learning in Adversarial Settings

    HIGHLY INFLUENTIAL

    Adversarial examples in the physical world


    Towards Evaluating the Robustness of Neural Networks

    HIGHLY INFLUENTIAL

    Explaining and Harnessing Adversarial Examples

    HIGHLY INFLUENTIAL

    Towards Robust Deep Neural Networks with BANG

    Action-Gap Phenomenon in Reinforcement Learning
