Shielding Atari Games with Bounded Prescience

Mirco Giacobbe, Mohammadhosein Hasanbeig, Daniel Kroening, Hjalmar Wijk
Deep reinforcement learning (DRL) is applied in safety-critical domains such as robotics and autonomous driving. Although it achieves superhuman performance in many tasks, whether DRL agents can be shown to act safely remains an open problem. Atari games are a simple yet challenging exemplar for evaluating the safety of DRL agents and feature a diverse portfolio of game mechanics. The safety of neural agents has been studied before using methods that either require a model of the system dynamics or an…
Safe Deployment of a Reinforcement Learning Robot Using Self Stabilization
This work defines a condition on the state and action spaces that, if satisfied, guarantees the robot's independent recovery to safety, and proposes a strategy and design that facilitate this recovery within a finite number of steps after a perturbation.
Do Androids Dream of Electric Fences? Safety-Aware Reinforcement Learning with Latent Shielding
This work presents a novel approach to safety-aware deep reinforcement learning in high-dimensional environments called latent shielding, which leverages internal representations of the environment learnt by model-based agents to “imagine” future trajectories and avoid those deemed unsafe.
Learning a Shield from Catastrophic Action Effects: Never Repeat the Same Mistake
A variant of the PPO algorithm that uses a shield to prevent agents from executing specific actions in specific states, called ShieldPPO, is introduced and empirically evaluated in a controlled environment, indicating that ShieldPPO outperforms PPO, as well as baseline methods from the safe reinforcement learning literature, in a range of settings.
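The shield described above can be pictured as a blocklist over state-action pairs: once an action is observed to cause a catastrophe in a state, it is never permitted there again. The following is an illustrative sketch of that idea, not the paper's implementation; the class and method names are hypothetical.

```python
# Illustrative sketch (hypothetical names, not ShieldPPO itself): a shield
# that records (state, action) pairs observed to cause catastrophes and
# blocks them on every later visit, so the agent never repeats a mistake.

class CatastropheShield:
    def __init__(self):
        # Hashable (state, action) pairs previously seen to be catastrophic.
        self.blocked = set()

    def record_catastrophe(self, state, action):
        self.blocked.add((state, action))

    def safe_actions(self, state, actions):
        # Filter out actions previously observed to be catastrophic here.
        return [a for a in actions if (state, a) not in self.blocked]


shield = CatastropheShield()
shield.record_catastrophe("s0", "left")
print(shield.safe_actions("s0", ["left", "right"]))  # ['right']
```

The agent's policy would then sample only from `safe_actions`, which is what lets the shield compose with an otherwise unchanged PPO training loop.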
Is Deep Reinforcement Learning Really Superhuman on Atari?
This work introduces SABER, a Standardized Atari BEnchmark for general Reinforcement learning algorithms, uses it to evaluate the current state of the art, Rainbow, and introduces a human world-records baseline, arguing that previous claims of expert or superhuman performance of DRL might not be accurate.
Deep Reinforcement Learning: A Brief Survey
This survey covers central algorithms in deep RL, including the deep Q-network (DQN), trust region policy optimization (TRPO), and asynchronous advantage actor-critic (A3C), and highlights the unique advantages of deep neural networks, focusing on visual understanding via RL.
Grandmaster level in StarCraft II using multi-agent reinforcement learning
The agent, AlphaStar, is evaluated, which uses a multi-agent reinforcement learning algorithm and has reached Grandmaster level, ranking among the top 0.2% of human players for the real-time strategy game StarCraft II.
Trial without Error: Towards Safe Reinforcement Learning via Human Intervention
This work formalizes human intervention for RL, shows how to reduce the human labor required by training a supervised learner to imitate the human's intervention decisions, and outlines extensions of the scheme that would be necessary to train model-free agents without a single catastrophe.
Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates
It is demonstrated that a recent deep reinforcement learning algorithm based on off-policy training of deep Q-functions can scale to complex 3D manipulation tasks and can learn deep neural network policies efficiently enough to train on real physical robots.
Safe Reinforcement Learning with Nonlinear Dynamics via Model Predictive Shielding
  • Osbert Bastani
  • Computer Science
    2021 American Control Conference (ACC)
  • 2021
This work proposes an approach, called model predictive shielding (MPS), that switches on-the-fly between a learned policy and a backup policy to ensure safety, proves that the approach guarantees safety, and empirically evaluates it on the cart-pole.
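The on-the-fly switching in MPS can be sketched as a one-step check: simulate the learned policy's action with a dynamics model, and if the predicted successor state is not recoverable by the backup policy, take the backup action instead. The toy 1-D dynamics, safe set, and policies below are assumptions for illustration, not the paper's cart-pole setup.

```python
# Hedged sketch of the MPS switching rule, with a made-up 1-D system.

def mps_action(state, learned_policy, backup_policy, model, is_recoverable):
    """Return the learned action if the model predicts the resulting state
    stays recoverable by the backup policy; otherwise fall back."""
    a = learned_policy(state)
    if is_recoverable(model(state, a)):
        return a
    return backup_policy(state)


# Toy example: state is a position x, dynamics are x' = x + a, and we treat
# |x| <= 1 as the set from which the backup controller can recover.
model = lambda x, a: x + a
is_recoverable = lambda x: abs(x) <= 1.0
learned = lambda x: 0.8                      # aggressive learned action
backup = lambda x: -0.1 if x > 0 else 0.1    # gently steer toward 0

print(mps_action(0.5, learned, backup, model, is_recoverable))  # -0.1
print(mps_action(0.0, learned, backup, model, is_recoverable))  # 0.8
```

From x = 0.5 the learned action would land at 1.3, outside the recoverable set, so the backup action is taken; from x = 0.0 the learned action is predicted safe and passes through unchanged.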
Online Shielding for Stochastic Systems
A method is presented to develop trustworthy reinforcement learning systems by automatically synthesizing a correct-by-construction runtime enforcer, called a shield, that blocks from the agent all actions that are unsafe with respect to a temporal logic specification.
Human-level control through deep reinforcement learning
This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks.
Formal Methods with a Touch of Magic
This work synthesizes a stand-alone correct-by-design controller that enjoys the favorable performance of RL, and incorporates a magic book in a bounded model checking (BMC) procedure, which allows finding numerous traces of the plant under the control of the wizard.
Safe Reinforcement Learning Using Probabilistic Shields
The concept of a probabilistic shield that enables RL decision-making to adhere to safety constraints with high probability is introduced and used to realize a shield that restricts the agent from taking unsafe actions, while optimizing the performance objective.
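A probabilistic shield of this kind can be pictured as a threshold filter: keep only the actions whose estimated probability of reaching an unsafe state stays below some bound, and fall back to the least risky action when nothing qualifies. The function, threshold, and risk estimates below are illustrative assumptions, not the paper's construction (which computes these probabilities by model checking).

```python
# Hedged sketch of a probabilistic shield (hypothetical names/values).

def probabilistic_shield(state, actions, violation_prob, delta=0.1):
    """Keep only actions whose estimated probability of violating the
    safety constraint is at most delta; if none qualify, permit only
    the least risky action so the agent always has a move."""
    allowed = [a for a in actions if violation_prob(state, a) <= delta]
    return allowed or [min(actions, key=lambda a: violation_prob(state, a))]


# Made-up risk estimates for illustration.
risk = {("s", "jump"): 0.5, ("s", "wait"): 0.02}
print(probabilistic_shield("s", ["jump", "wait"],
                           lambda s, a: risk[(s, a)]))  # ['wait']
```

The agent then optimizes its performance objective over the shielded action set, which is how the high-probability safety constraint and the learning objective are kept separate.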