Corpus ID: 219708625

Formal Verification of End-to-End Learning in Cyber-Physical Systems: Progress and Challenges

Nathan Fulton, Nathan Hunt, N. Hoang, Subhro Das
Autonomous systems -- such as self-driving cars, autonomous drones, and automated trains -- must come with strong safety guarantees. Over the past decade, techniques based on formal methods have enjoyed some success in providing strong correctness guarantees for large software systems including operating system kernels, cryptographic protocols, and control software for drones. These successes suggest it might be possible to ensure the safety of autonomous systems by constructing formal…


Safe Reinforcement Learning via Formal Methods: Toward Safe Control Through Proof and Learning
It is proved that the approach of incorporating knowledge about safe control into learning systems preserves safety guarantees, and it is demonstrated that the empirical performance benefits of reinforcement learning are retained.
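The idea of combining a verified safety monitor with a learned controller can be illustrated with a minimal sketch. All names here (the monitor, the braking model, the shield function) are hypothetical illustrations of the general pattern, not the paper's actual method: the learned policy's proposed action is checked by a runtime monitor, and a verified fallback (hard braking) is used whenever the check fails.

```python
# Maximum braking deceleration assumed by the (hypothetical) safety model.
MAX_BRAKE = 5.0

def safe_monitor(state, action):
    """Accept an action only if stopping is still possible within the gap.

    state = (position, velocity, gap_to_obstacle); action = acceleration.
    Braking or coasting (action <= 0) is always considered safe here.
    """
    _pos, vel, gap = state
    braking_distance = vel * vel / (2 * MAX_BRAKE)
    return action <= 0 or braking_distance + action < gap

def shielded_policy(state, learned_action, fallback_action=-MAX_BRAKE):
    """Return the learned action if the monitor accepts it; otherwise brake."""
    if safe_monitor(state, learned_action):
        return learned_action
    return fallback_action
```

With a large gap the learned action passes through unchanged; with a small gap the shield overrides it with full braking, so safety does not depend on what the learner proposes.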
European Train Control System: A Case Study in Formal Verification
It is proved that the ETCS protocol remains correct even in the presence of perturbation by disturbances in the dynamics, and that safety is preserved when PI-controlled speed supervision is used.
ModelPlex: Verified Runtime Validation of Verified Cyber-Physical System Models
ModelPlex is introduced, a method ensuring that verification results about models apply to CPS implementations, together with a systematic technique to automatically synthesize provably correct monitors from CPS proofs in differential dynamic logic.
Towards verification of hybrid systems in a foundational proof assistant
A TLA-inspired formalism in Coq is defined and used to verify two quadcopter modules: the first limits the quadcopter's velocity and the second limits its altitude, and both worked as intended.
Towards Practical Verification of Machine Learning: The Case of Computer Vision Systems
A generic framework for evaluating the security and robustness of ML systems using different real-world safety properties is proposed, and VeriVis, a scalable methodology that can verify a diverse set of safety properties for state-of-the-art computer vision systems with only black-box access, is evaluated.
Numerical verification of affine systems with up to a billion dimensions
This paper improves the scalability of affine systems verification in terms of the number of dimensions (variables) in the system, produces accurate counterexamples when properties are violated, and is shown to analyze a system with one billion real-valued state variables.
Bellerophon: Tactical Theorem Proving for Hybrid Systems
This work presents a tactics language and library for hybrid systems verification, named Bellerophon, that provides a way to convey insights by programming hybrid systems proofs.
Adaptive Cruise Control: Hybrid, Distributed, and Now Formally Verified
A formal model of a distributed car control system in which every car is controlled by adaptive cruise control is developed, and it is verified that the control model satisfies its main safety objective and guarantees collision freedom for arbitrarily many cars driving on a street, even if new cars enter the lane from on-ramps or multi-lane streets.
A Formally Verified Hybrid System for the Next-Generation Airborne Collision Avoidance System
The geometric configurations under which the advice given by ACAS X is safe under a precise set of assumptions are determined, and these configurations are formally verified using hybrid systems theorem-proving techniques.
Driving to safety: How many miles of driving would it take to demonstrate autonomous vehicle reliability?
How safe are autonomous vehicles? The answer is critical for determining how autonomous vehicles may shape motor vehicle safety and public health, and for developing sound policies to govern their…