Safe Reinforcement Learning via Shielding

@article{Alshiekh2018SafeRL,
  title={Safe Reinforcement Learning via Shielding},
  author={Mohammed Alshiekh and Roderick Bloem and R{\"u}diger Ehlers and Bettina K{\"o}nighofer and Scott Niekum and Ufuk Topcu},
  journal={ArXiv},
  year={2018},
  volume={abs/1708.08611}
}
Reinforcement learning algorithms discover policies that maximize reward, but do not necessarily guarantee safety during the learning or execution phases. This paper introduces a new approach to learning optimal policies while enforcing properties expressed in temporal logic: given the specification that the learning system must obey, a reactive system called a shield is synthesized. Key Method: the shield is introduced into the traditional learning process in two alternative ways, depending on the location at which the shield is implemented. In the first, the shield acts each time the learning agent is about to make a decision and provides a list of safe actions. In the second way, the shield is introduced after the learning agent…
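A minimal sketch of the two placements, which the paper calls preemptive and post-posed shielding: in the first, the shield filters the action set before the learner chooses; in the second, it overrides an unsafe choice after the fact. Everything below (the safe-actions oracle, the learner interface, the toy usage) is an illustrative assumption, not the authors' implementation.

from typing import Callable, List

# Toy stand-in for the synthesized reactive shield: given a state, it
# reports which actions keep the temporal-logic specification satisfied.
SafeActionsOracle = Callable[[int], List[str]]

def preemptive_step(state: int, safe_actions: SafeActionsOracle,
                    pick: Callable[[int, List[str]], str]) -> str:
    # Placement 1: the shield acts before each decision and hands the
    # learner only the list of safe actions to choose from.
    return pick(state, safe_actions(state))

def post_posed_step(state: int, safe_actions: SafeActionsOracle,
                    pick: Callable[[int, List[str]], str],
                    all_actions: List[str]) -> str:
    # Placement 2: the shield sits after the learner and overrides the
    # chosen action only if it would violate the specification.
    proposed = pick(state, all_actions)
    allowed = safe_actions(state)
    return proposed if proposed in allowed else allowed[0]  # assumes a safe action exists

# Toy usage: in state 0 the shield forbids 'right'.
oracle = lambda s: ["left", "wait"] if s == 0 else ["left", "right", "wait"]
print(preemptive_step(0, oracle, lambda s, acts: acts[0]))                              # -> left
print(post_posed_step(0, oracle, lambda s, acts: "right", ["left", "right", "wait"]))   # -> left (overridden)

Either placement leaves the learner's interface unchanged, which matters for the paper's discussion of the requirements a shield must meet to preserve the learner's convergence guarantees.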

AlwaysSafe: Reinforcement Learning without Safety Constraint Violations during Training

TLDR
This work proposes an RL algorithm that uses a concise abstract model of the safety aspects of a constrained MDP (CMDP) to learn policies safely, that is, without violating the constraints, and proves that the algorithm is safe under the given assumptions.

Safe Reinforcement Learning via Probabilistic Timed Computation Tree Logic

  • Li Qian, Jing Liu
  • Computer Science
    2020 International Joint Conference on Neural Networks (IJCNN)
  • 2020
TLDR
A safe algorithm called Safe Control with Supervisor (SCS) monitors the system and repairs the agent's actions at runtime, guiding the system to obey the specification described in probabilistic timed Computation Tree Logic (ptCTL).

Safe Distributional Reinforcement Learning

TLDR
This paper formalizes safety in reinforcement learning with a constrained RL formulation in the distributional RL setting and empirically validates its propositions on artificial and real domains against appropriate state-of-the-art safe RL algorithms.

Safe Reinforcement Learning via Shielding under Partial Observability

TLDR
It is shown that a carefully integrated shield ensures safety and can improve the convergence rate and performance of RL agents, and that a shield can be used to bootstrap state-of-the-art RL agents.

Verifiably Safe Off-Model Reinforcement Learning

TLDR
This paper introduces verification-preserving model updates, the first approach toward obtaining formal safety guarantees for reinforcement learning in settings where multiple environmental models must be taken into account, through a combination of design-time model updates and runtime model falsification.

Safe Reinforcement Learning Using Probabilistic Shields

TLDR
The concept of a probabilistic shield, which enables RL decision-making to adhere to safety constraints with high probability, is introduced and used to realize a shield that prevents the agent from taking unsafe actions while optimizing the performance objective.
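The TLDR describes a filtering rule, so a hedged sketch may help: actions whose estimated probability of violating the safety property is too high, relative to the safest available action, are removed before the agent chooses. In the paper these per-action violation probabilities come from model checking; here they are given directly, and the additive tolerance delta is an illustrative simplification of the paper's thresholding rule.

def shielded_actions(violation_prob: dict, delta: float) -> list:
    # Keep only actions whose violation probability is within `delta` of
    # the safest available action; the agent then optimizes reward over
    # this restricted set.
    safest = min(violation_prob.values())
    return [a for a, p in violation_prob.items() if p <= safest + delta]

# Toy usage: 'right' is blocked because it is far riskier than 'left'.
print(shielded_actions({"left": 0.01, "right": 0.40, "wait": 0.03}, delta=0.05))
# -> ['left', 'wait']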

Safe Reinforcement Learning via Statistical Model Predictive Shielding

TLDR
This work proves that SMPS ensures safety with high probability, and empirically evaluates its performance on several benchmarks.
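The TLDR is terse, so here is a hedged sketch of the model predictive shielding pattern the title points to: before committing to the learned policy's action, sample rollouts of a known-safe backup policy from the predicted next state and allow the action only if enough rollouts stay safe. All names, and the sampling rule itself, are assumptions for illustration rather than the paper's exact construction.

def allow_action(next_state, backup_policy, simulate, is_safe,
                 horizon=20, samples=50, tolerance=0.99):
    # Statistical recoverability check: from the state the learned action
    # would reach, does the backup policy keep the system safe with
    # sufficiently high empirical probability?
    safe_runs = 0
    for _ in range(samples):
        s, ok = next_state, True
        for _ in range(horizon):
            s = simulate(s, backup_policy(s))  # (possibly stochastic) dynamics
            if not is_safe(s):
                ok = False
                break
        safe_runs += ok
    return safe_runs / samples >= tolerance

# Toy usage: dynamics drift toward 0 under the backup; safe while |s| < 5.
print(allow_action(next_state=2,
                   backup_policy=lambda s: -1 if s > 0 else 1,
                   simulate=lambda s, a: s + a,
                   is_safe=lambda s: abs(s) < 5))  # -> True

If the check fails, the agent would fall back to the backup policy itself, which is the step that yields the high-probability safety guarantee.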

Verifiably safe exploration for end-to-end reinforcement learning

TLDR
A first approach toward enforcing formal safety constraints on end-to-end policies with visual inputs is contributed, drawing on recent advances in object detection and automated reasoning for hybrid dynamical systems.

Safe Reinforcement Learning via Probabilistic Shields

TLDR
This paper introduces the concept of a probabilistic shield that enables decision-making to adhere to safety constraints with high probability and discusses tradeoffs between sufficient progress in exploration of the environment and ensuring safety.

Safe Reinforcement Learning Using Advantage-Based Intervention

TLDR
This work proposes a new algorithm, SAILR, that uses an intervention mechanism based on advantage functions to keep the agent safe throughout training and optimizes the agent’s policy using off-the-shelf RL algorithms designed for unconstrained MDPs.
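A hedged sketch of the advantage-based gate in the spirit of SAILR: a cost critic scores the proposed action, and the intervention triggers when its advantage over the baseline's expected cost exceeds a threshold. The names cost_q, cost_v, and eta are illustrative assumptions, not the paper's exact construction.

def gated_step(state, proposed_action, backup_policy, cost_q, cost_v, eta):
    # Let the RL action through unless the advantage under the *cost*
    # critic says it is markedly less safe than the baseline behavior,
    # in which case switch to the backup policy.
    advantage = cost_q(state, proposed_action) - cost_v(state)
    if advantage > eta:
        return backup_policy(state), True   # intervene on an unsafe-looking action
    return proposed_action, False           # safe enough: no intervention

# Toy usage with constant critics: the risky action is replaced.
print(gated_step(0, "dash", lambda s: "wait",
                 cost_q=lambda s, a: 0.9 if a == "dash" else 0.1,
                 cost_v=lambda s: 0.1, eta=0.5))  # -> ('wait', True)

Because the gate only wraps the environment interaction, the inner learner can be any off-the-shelf algorithm for unconstrained MDPs, matching the TLDR above.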
...

References

Showing 1–10 of 27 references

Safety-Constrained Reinforcement Learning for MDPs

TLDR
This work frames controller synthesis for stochastic and partially unknown environments, in which safety is essential, as a Markov decision process whose expected performance is measured using a cost function that is unknown prior to run-time exploration of the state space.

A comprehensive survey on safe reinforcement learning

TLDR
This work categorizes and analyzes two approaches to safe reinforcement learning: modifying the optimality criterion (the classic discounted finite/infinite horizon) with a safety factor, and incorporating external knowledge or the guidance of a risk metric.

Safe Exploration Techniques for Reinforcement Learning - An Overview

TLDR
This work overviews different approaches to safety in (semi)autonomous robotics and addresses the issue of how to define safety in real-world applications, where absolute safety is apparently unachievable in a continuous and random world.

Correct-by-synthesis reinforcement learning with temporal logic constraints

TLDR
This work considers the synthesis of optimal reactive controllers that satisfy a given temporal logic specification through interaction with an uncontrolled environment, under an a priori unknown performance criterion, and presents an algorithm for the overall problem.

Safe Exploration in Markov Decision Processes

TLDR
This paper proposes a general formulation of safety through ergodicity, shows that imposing safety by restricting attention to the resulting set of guaranteed safe policies is NP-hard, and presents an efficient algorithm for guaranteed safe, but potentially suboptimal, exploration.

Learning on real robots from experience and simple user feedback

TLDR
A novel algorithm is described that allows fast and continuous learning on a physical robot working in a real environment and lets a human observer control the reward given to the robot, hence avoiding the burden of defining a reward function.

The Arcade Learning Environment: An Evaluation Platform for General Agents (Extended Abstract)

TLDR
The promise of ALE is illustrated by developing and benchmarking domain-independent agents designed using well-established AI techniques for both reinforcement learning and planning, and an evaluation methodology made possible by ALE is proposed.

Reinforcement Learning with Human Teachers: Evidence of Feedback and Guidance with Implications for Learning Performance

TLDR
The importance of understanding the human-teacher/robot-learner system as a whole in order to design algorithms that support how people want to teach while simultaneously improving the robot's learning performance is demonstrated.

Receding Horizon Temporal Logic Planning

TLDR
A receding horizon framework that effectively reduces the synthesis problem to a set of smaller problems, together with a response mechanism to handle failures that may occur due to a mismatch between the actual system and its model.

Reinforcement Learning: An Introduction

TLDR
This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, which ranges from the history of the field's intellectual foundations to the most recent developments and applications.