Self-Supervised Online Learning for Safety-Critical Control using Stereo Vision

Ryan K. Cosner, Ivan Dario Jimenez Rodriguez, Tamás G. Molnár, Wyatt Ubellacker, Yisong Yue, A. Ames, Katherine L. Bouman
With the increasing prevalence of complex vision-based sensing methods for obstacle identification and state estimation, characterizing environment-dependent measurement errors has become a difficult and essential part of modern robotics. This paper presents a self-supervised learning approach to safety-critical control: the uncertainty associated with stereo vision is estimated and adapted online to new visual environments, and this estimate is leveraged in a safety…
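The abstract couples a learned measurement-uncertainty estimate with safety-critical control. A minimal sketch of how such an estimate can enter a control barrier function (CBF) safety filter, assuming a control-affine system with a single barrier constraint; the function names and the robustness margin `eps` are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def cbf_safety_filter(u_des, h, Lf_h, Lg_h, alpha=1.0, eps=0.0):
    """Minimal CBF quadratic-program filter for a control-affine system.

    Solves  min_u ||u - u_des||^2  s.t.  Lf_h + Lg_h @ u >= -alpha*h + eps,
    where `eps` is an extra robustness margin (e.g. a learned bound on
    measurement error, in the spirit of measurement-robust CBFs). With a
    single affine constraint the QP has a closed-form projection solution.
    """
    u_des = np.asarray(u_des, dtype=float)
    Lg_h = np.asarray(Lg_h, dtype=float)
    slack = Lf_h + Lg_h @ u_des + alpha * h - eps
    if slack >= 0:        # desired input already satisfies the constraint
        return u_des
    g2 = Lg_h @ Lg_h
    if g2 == 0:           # constraint is independent of u; cannot correct
        return u_des
    # Project u_des onto the boundary of the safe input halfspace.
    return u_des - (slack / g2) * Lg_h
```

For a single integrator with barrier h(x) = x (keep x >= 0), a desired input u_des = -2 at h = 0.5 is clipped to -0.5; raising `eps` (a larger uncertainty estimate) makes the filter more conservative.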



Robust Guarantees for Perception-Based Control
This work shows that under suitable smoothness assumptions on the perception map and generative model relating state to high-dimensional data, an affine error model is sufficiently rich to capture all possible error profiles, and can be learned via a robust regression problem.
Measurement-Robust Control Barrier Functions: Certainty in Safety with Uncertainty in State
A rigorous framework for safety-critical control of systems with erroneous state estimates is proposed by leveraging Control Barrier Functions and unifying the method of Backup Sets for synthesizing control invariant sets with robustness requirements, which provides theoretical guarantees on safe behavior in the presence of imperfect measurements and improved robustness over standard CBF approaches.
Guaranteed Safe Online Learning via Reachability: tracking a ground target using a quadrotor
  • J. Gillula, C. Tomlin • 2012 IEEE International Conference on Robotics and Automation, 2012
The GSOLR framework can be applied to a target tracking problem, in which an observing quadrotor helicopter must keep a target ground vehicle with unknown (but bounded) dynamics inside its field of view at all times, while simultaneously attempting to build a motion model of the target.
Guaranteeing Safety of Learned Perception Modules via Measurement-Robust Control Barrier Functions
The notion of a Measurement-Robust Control Barrier Function (MR-CBF) is defined as a tool for determining safe control inputs when facing measurement model uncertainty and is used to inform sampling methodologies for learning-based perception systems and quantify tolerable error in the resulting learned models.
Online learning for characterizing unknown environments in ground robotic vehicle models
An online learning algorithm is used to fit a statistical model of error that provides enough expressive power to enable prediction directly from motion control signals and low-level visual features and compares favorably to predictors that do not incorporate this information.
Improving robot navigation through self-supervised online learning
An online, probabilistic model is introduced to provide an efficient, self-supervised learning method that accurately predicts traversal costs over large areas from overhead data; it can significantly improve the versatility of many unmanned ground vehicles by allowing them to traverse highly varied terrains with increased performance.
Barrier-Certified Adaptive Reinforcement Learning With Applications to Brushbot Navigation
A safe learning framework that employs an adaptive model learning algorithm together with barrier certificates for systems with possibly nonstationary agent dynamics, and solutions to the barrier-certified policy optimization are guaranteed to be globally optimal, ensuring the greedy policy improvement under mild conditions.
Model-Free Safety-Critical Control for Robotic Systems
To maintain safety, a safe velocity is synthesized based on control barrier function theory without relying on a potentially complicated high-fidelity dynamical model of the robot, culminating in model-free safety-critical control.
Reinforcement Learning for Safety-Critical Control under Model Uncertainty, using Control Lyapunov Functions and Control Barrier Functions
A novel reinforcement learning framework is proposed which learns the model uncertainty present in the CBF and CLF constraints, as well as other control-affine dynamic constraints in the quadratic program.
Probabilistic representation of the uncertainty of stereo-vision and application to obstacle detection
A probabilistic representation of the specific uncertainty for stereo-vision is proposed, which takes advantage of both aspects - distance and disparity, and a computationally-efficient implementation based on the u-disparity approach is given.