Corpus ID: 231603061

Learning Safe Multi-Agent Control with Decentralized Neural Barrier Certificates

Authors: Zengyi Qin, K. Zhang, Yuxiao Chen, Jingkai Chen, Chuchu Fan
We study the multi-agent safe control problem where agents must avoid collisions with static obstacles and with each other while reaching their goals. Our core idea is to learn the multi-agent control policy jointly with learning the control barrier functions as safety certificates. We propose a novel joint-learning framework that can be implemented in a decentralized fashion, with generalization guarantees for certain function classes. Such a decentralized framework can adapt to an…
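A control barrier function h is typically trained so that it is positive on safe states, negative on unsafe states, and satisfies ḣ + αh ≥ 0 along the closed-loop dynamics. A minimal NumPy sketch of such certificate losses (the function name, margin, and array shapes are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def cbf_loss(h_safe, h_unsafe, h_vals, h_dot, alpha=1.0, margin=0.1):
    """Hinge penalties for training a neural barrier function h.

    h_safe   : h(x) on states labeled safe      -> want h >= margin
    h_unsafe : h(x) on states labeled unsafe    -> want h <= -margin
    h_vals, h_dot : h and its time derivative along closed-loop
                    trajectories -> want h_dot + alpha * h >= margin
    Each penalty is zero exactly when its certificate condition holds.
    """
    loss_safe = np.maximum(0.0, margin - h_safe).mean()
    loss_unsafe = np.maximum(0.0, margin + h_unsafe).mean()
    loss_deriv = np.maximum(0.0, margin - (h_dot + alpha * h_vals)).mean()
    return loss_safe + loss_unsafe + loss_deriv
```

In the joint-learning setting, gradients of these penalties flow into both the barrier network and the control policy, since h_dot depends on the controlled dynamics.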


Sablas: Learning Safe Control for Black-Box Dynamical Systems
This paper proposes a novel method that can learn safe control policies and barrier certificates for black-box dynamical systems without requiring an accurate system model, and shows that the safety certificates hold on the black-box system.
Overcoming Exploration: Deep Reinforcement Learning in Complex Environments from Temporal Logic Specifications
This work presents a Deep Reinforcement Learning algorithm for a task-guided robot with unknown continuous-time dynamics deployed in a large-scale complex environment and proposes a novel path planning-guided reward scheme that is dense over the state space and robust to infeasibility of computed geometric paths due to the unknown robot dynamics.
Joint Synthesis of Safety Certificate and Safe Control Policy using Constrained Reinforcement Learning
A novel approach that simultaneously synthesizes the energy-function-based safety certificates and learns the safe control policies with constrained reinforcement learning (CRL) and demonstrates that the proposed FAC-SIS synthesizes a valid safe index while learning a safe control policy.
Electric Propulsion Intelligent Control (EPIC) Toolbox for Proximity Operations and Safety Analysis in Low-Earth Orbit (LEO)
The main goal of this research is to build an optimal toolset to enable mission trajectory planning using low-thrust platforms. More specifically, the Electric Propulsion Intelligent Control (EPIC)…
Reactive and Safe Road User Simulations using Neural Barrier Certificates
This work proposes a reactive agent model that ensures safety without compromising the original purposes, by learning only high-level decisions from expert data together with a low-level decentralized controller guided by the jointly learned decentralized barrier certificates.
An Analytical Framework for Control Synthesis of Cyber-Physical Systems with Safety Guarantee
This paper constructs a hybrid system that models CPS adopting any of the simplex, BFT++, and other practical cyber resilient architectures (CRAs), and derives sufficient conditions via the proposed framework under which a control policy is guaranteed to be safe.
Safe Control with Learned Certificates: A Survey of Neural Lyapunov, Barrier, and Contraction methods
The authors hope this paper serves as an accessible introduction to the theory and practice of certificate learning, both for those who wish to apply these tools to practical robotics problems and for those who wish to dive more deeply into the theory of learning for control.
Joint Differentiable Optimization and Verification for Certified Reinforcement Learning
This work proposes a framework that jointly conducts reinforcement learning and formal verification by formulating and solving a novel bilevel optimization problem, made differentiable by gradients from the value function and the certificates.
Detecting danger in gridworlds using Gromov's Link Condition
A modification to the original Abrams, Ghrist & Peterson setup is introduced to capture agent braiding and thereby more naturally represent the topology of gridworlds, which provides a novel method for seeking guaranteed safety limitations in discrete task environments with single or multiple agents.
Learning Safe, Generalizable Perception-based Hybrid Control with Certificates
This work introduces a novel learning-enabled perception-feedback hybrid controller, called LOCUS (Learning-enabled Observation-feedback Control Using Switching), which can safely navigate unknown environments, consistently reach its goal, and generalize safely to environments outside of the training dataset.


PIC: Permutation Invariant Critic for Multi-Agent Deep Reinforcement Learning
This work proposes a 'permutation invariant critic' (PIC) that yields identical output irrespective of agent ordering, enabling the model to scale to 30 times more agents and achieving improvements in test episode reward of 15% to 50% on the challenging multi-agent particle environment (MPE).
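One simple way to obtain permutation invariance is to pool per-agent embeddings with a symmetric operation such as the mean. A minimal NumPy sketch (the actual PIC uses a graph-network critic; the sizes and random weights here are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)
W_embed = rng.standard_normal((4, 8))  # per-agent embedding weights (illustrative sizes)
w_out = rng.standard_normal(8)         # output weights

def pic_value(obs):
    """Score a joint observation, invariant to agent ordering.

    obs: (n_agents, 4) array. Mean-pooling over the agent axis makes
    the output identical under any permutation of the rows.
    """
    emb = np.tanh(obs @ W_embed)       # (n_agents, 8) per-agent features
    return float(emb.mean(axis=0) @ w_out)
```

Because the pooled representation has a fixed size, the same critic applies unchanged as the number of agents grows.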
Control barrier function based quadratic programs with application to adaptive cruise control
A control methodology that unifies control barrier functions and control Lyapunov functions through quadratic programs is developed, which allows for the simultaneous achievement of control objectives subject to conditions on the admissible states of the system.
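With one input and one barrier constraint, the CBF quadratic program has a closed-form solution. A sketch for a single-integrator system ẋ = u (the function name and the choice of dynamics are illustrative assumptions):

```python
def cbf_qp_filter(u_nom, h, dh_dx, alpha=1.0):
    """Minimally modify u_nom so the barrier condition holds:

        minimize (u - u_nom)^2   s.t.   dh_dx * u + alpha * h >= 0

    for x_dot = u. With a single affine constraint and dh_dx != 0, the
    projection is closed-form, so no QP solver is needed.
    """
    slack = dh_dx * u_nom + alpha * h
    if slack >= 0.0:
        return u_nom                 # nominal control is already safe
    return u_nom - slack / dh_dx     # project onto the constraint boundary
```

For example, with h = 0.5, dh_dx = 1, and a nominal command of -2.0, the filter returns -0.5, the closest input that keeps ḣ + αh ≥ 0; a safe nominal command passes through unchanged. In the general vector case this becomes a small QP, which is what the unified CBF/CLF formulation solves online.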
Learning Stability Certificates from Data
It is demonstrated empirically that certificates for complex dynamics can be efficiently learned, and that the learned certificates can be used for downstream tasks such as adaptive control.
Neural Certificates for Safe Control Policies
Safety means that a policy must not drive the state of the system into any unsafe region, while goal-reaching requires that the trajectory of the controlled system asymptotically converge to a goal region (a generalization of stability).
MAMPS: Safe Multi-Agent Reinforcement Learning via Model Predictive Shielding
This work proposes multi-agent model predictive shielding (MAMPS), an algorithm that provably guarantees safety for an arbitrary learned policy; it operates by using the learned policy as often as possible, falling back to a backup policy whenever it cannot guarantee the learned policy's safety.
Searching with Consistent Prioritization for Multi-Agent Path Finding
This work explores the space of all possible partial priority orderings as part of a novel systematic and conflict-driven combinatorial search framework and develops new theoretical results that explore the limitations of prioritized planning, in terms of completeness and optimality, for the first time.
Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments
An adaptation of actor-critic methods that considers action policies of other agents and is able to successfully learn policies that require complex multi-agent coordination is presented.
Safety Barrier Certificates for Collisions-Free Multirobot Systems
This paper presents safety barrier certificates that ensure scalable and provably collision-free behaviors in multirobot systems by modifying the nominal controllers to formally satisfy safety constraints.
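A common choice of pairwise barrier in multirobot settings is h_ij = ‖p_i − p_j‖² − d², which is nonnegative exactly when robots i and j keep the safety distance. A sketch that evaluates it for every pair (the distance threshold is an illustrative assumption):

```python
import numpy as np

def pairwise_barriers(positions, d_min=0.5):
    """h_ij = ||p_i - p_j||^2 - d_min^2 for each robot pair (i < j).

    positions: (n_robots, dim) array. h_ij >= 0 means robots i and j
    keep at least the safety distance; a safe controller must keep
    every h_ij nonnegative over time.
    """
    n = len(positions)
    h = {}
    for i in range(n):
        for j in range(i + 1, n):
            diff = positions[i] - positions[j]
            h[(i, j)] = float(diff @ diff) - d_min ** 2
    return h
```

The certificate approach then enforces a decrease condition on each h_ij through per-robot constraints, which is what keeps the method scalable.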
Scalable and Safe Multi-Agent Motion Planning with Nonlinear Dynamics and Bounded Disturbances
We present a scalable and effective multi-agent safe motion planner that enables a group of agents to move to their desired locations while avoiding collisions with obstacles and other agents, with…
Fast and Guaranteed Safe Controller Synthesis for Nonlinear Vehicle Models
The problem of synthesizing a controller for nonlinear systems with reach-avoid requirements is addressed and a method that can find a reference trajectory by solving a satisfiability problem over linear constraints is proposed.