The Open Vision Computer: An Integrated Sensing and Compute System for Mobile Robots

@article{Quigley2019TheOV,
  title={The Open Vision Computer: An Integrated Sensing and Compute System for Mobile Robots},
  author={Morgan Quigley and Kartik Mohta and Shreyas S. Shivakumar and Michael Watterson and Yash Mulgaonkar and Mikael Arguedas and Ke Sun and Sikang Liu and Bernd Pfrommer and Vijay R. Kumar and Camillo Jose Taylor},
  journal={2019 International Conference on Robotics and Automation (ICRA)},
  year={2019},
  pages={1834-1840}
}
  • Published 20 September 2018
  • Computer Science
  • 2019 International Conference on Robotics and Automation (ICRA)
In this paper we describe the Open Vision Computer (OVC), which was designed to support high-speed, vision-guided autonomous drone flight. In particular, our aim was to develop a system suitable for relatively small-scale flying platforms where size, weight, power consumption, and computational performance were all important considerations. This manuscript describes the primary features of our OVC system and explains how they are used to support fully autonomous indoor and outdoor…
State Estimation, Control, And Planning For A Quadrotor Team
This dissertation addresses the problem of developing a team of autonomous quadrotors that can be quickly deployed and controlled by a single human operator, and proposes a method that uses the relative position/bearing measurements of nearby robots detected using the onboard camera to solve the problem.
Survey on Developing Autonomous Micro Aerial Vehicles
As sensors such as inertial measurement units, cameras, and LiDAR have become cheaper and smaller, research has been actively conducted to implement functions automating micro…
Vision-based Multi-MAV Localization with Anonymous Relative Measurements Using Coupled Probabilistic Data Association Filter
This framework fuses the onboard VIO with anonymous, vision-based robot-to-robot detections to estimate all robot poses in one common frame, addressing three main challenges: the initial configuration of the robot team is unknown, the data association between each vision-based detection and the robot targets is unknown, and the measurements are nonlinear.
Large-scale Autonomous Flight with Real-time Semantic SLAM under Dense Forest Canopy
An integrated autonomous flight and semantic SLAM system that can perform long-range missions and real-time semantic mapping in highly cluttered, unstructured, and GPS-denied under-canopy environments is proposed.
Dronument: System for Reliable Deployment of Micro Aerial Vehicles in Dark Areas of Large Historical Monuments
This letter presents a self-contained system for robust deployment of autonomous aerial vehicles in environments without access to global navigation systems and with limited lighting conditions, using a unique and reliable aerial platform with a multi-modal lightweight sensory setup to acquire data in human-restricted areas with adverse lighting conditions.
PynqCopter - An Open-source FPGA Overlay for UAVs
The result of the experiment is PynqCopter, an open-source control system implemented on an FPGA that is able to run multiple computations in parallel, allowing it to process large amounts of data at runtime.
Binarized P-Network: Deep Reinforcement Learning of Robot Control from Raw Images on FPGA
This letter proposes a novel DRL algorithm called Binarized P-Network (BPN), which learns image-input control policies using binarized convolutional neural networks (BCNNs) and adopts a robust value update scheme called Conservative Value Iteration, which is tolerant of function approximation errors.
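For background, binarized convolutional layers are commonly realized with an XNOR-Net-style scaled sign binarization; the following is a generic illustration of that scheme, not necessarily the exact binarization used in BPN:

  \hat{W} = \alpha \,\mathrm{sign}(W), \qquad \alpha = \tfrac{1}{n}\lVert W \rVert_1,
  \qquad \frac{\partial\,\mathrm{sign}(w)}{\partial w} \approx \mathbf{1}_{\{|w| \le 1\}} \ \text{(straight-through estimator)},

so the forward pass uses only binary weights scaled by a per-filter constant, which is what makes such networks amenable to FPGA deployment.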
PRGFlow: Benchmarking SWAP-Aware Unified Deep Visual Inertial Odometry
A deep learning approach for visual translation estimation is presented and loosely fused with an inertial sensor for full 6-DoF odometry estimation, together with a detailed benchmark comparing different architectures, loss functions, and compression methods to enable scalability.
Optimizing DNN Architectures for High Speed Autonomous Navigation in GPS Denied Environments on Edge Devices
This work proposes a novel algorithm to find sparse “sub-networks” of existing pre-trained models that can complete autonomous navigation missions at speeds up to 4 m/s on the ODROID XU4, which existing state-of-the-art methods fail to do.
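As a rough illustration of how a sparse sub-network can be carved out of a pre-trained model by magnitude pruning, here is a minimal sketch using PyTorch's pruning utilities; the backbone and sparsity level are placeholders, and this is not the selection algorithm proposed in the paper:

  import torch
  import torch.nn.utils.prune as prune
  import torchvision.models as models

  # Placeholder backbone; any pre-trained nn.Module works here.
  model = models.resnet18(pretrained=True)

  # Zero out the 50% smallest-magnitude weights in every conv layer,
  # leaving a sparse "sub-network" of the original model.
  for module in model.modules():
      if isinstance(module, torch.nn.Conv2d):
          prune.l1_unstructured(module, name="weight", amount=0.5)
          prune.remove(module, "weight")  # bake the mask into the weights

Sparsity alone typically only pays off at runtime when the inference engine or hardware can actually skip the zeroed weights.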
Good Feature Matching: Toward Accurate, Robust VO/VSLAM With Low Latency
Good feature matching, an active map-to-frame feature matching method, is presented and integrated into monocular and stereo feature-based VSLAM systems, and the combination of deterministic selection and randomized acceleration is studied.
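For contrast, the naive baseline that such active matching methods aim to beat in latency is exhaustive descriptor matching; a minimal OpenCV sketch of that baseline follows (image paths and feature counts are placeholders, and this is not the paper's algorithm):

  import cv2

  # Two grayscale frames (placeholder paths).
  img1 = cv2.imread("frame_prev.png", cv2.IMREAD_GRAYSCALE)
  img2 = cv2.imread("frame_curr.png", cv2.IMREAD_GRAYSCALE)

  # Detect ORB keypoints and compute binary descriptors.
  orb = cv2.ORB_create(nfeatures=1000)
  kp1, des1 = orb.detectAndCompute(img1, None)
  kp2, des2 = orb.detectAndCompute(img2, None)

  # Brute-force Hamming matching with cross-checking: O(N*M) work per frame,
  # which is exactly the cost that active feature selection tries to avoid.
  matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
  matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
  print(len(matches), "putative matches")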

References

Showing 1-10 of 18 references
PIRVS: An Advanced Visual-Inertial SLAM System with Flexible Sensor Fusion and Hardware Co-Design
Experimental results demonstrate that the proposed PerceptIn Robotics Vision System (PIRVS) reaches accuracy comparable to state-of-the-art visual-inertial algorithms on a PC, while being more efficient on the PIRVS hardware.
Fast, autonomous flight in GPS-denied and cluttered environments
Experimental testing reveals that the proposed system design and software architecture can deliver fast and robust aerial robot autonomous navigation in cluttered, GPS-denied environments.
A synchronized visual-inertial sensor system with FPGA pre-processing for accurate real-time SLAM
Robust, accurate pose estimation and mapping in real time and in six dimensions is a primary need of mobile robots, in particular flying micro aerial vehicles (MAVs), which still perform their impressive…
ROS: an open-source Robot Operating System
This paper discusses how ROS relates to existing robot software frameworks and briefly overviews some of the available application software that uses ROS.
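To illustrate the publish/subscribe model at the heart of ROS, here is a minimal rospy subscriber; the topic name and message type are generic examples rather than anything specific to the OVC:

  #!/usr/bin/env python
  import rospy
  from sensor_msgs.msg import Image

  def on_image(msg):
      # Invoked once per incoming frame on the subscribed topic.
      rospy.loginfo("got %dx%d image at %s", msg.width, msg.height, msg.header.stamp)

  if __name__ == "__main__":
      rospy.init_node("image_listener")
      rospy.Subscriber("camera/image_raw", Image, on_image)
      rospy.spin()  # hand control to ROS until shutdown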
Robust Stereo Visual Inertial Odometry for Fast Autonomous Flight
It is demonstrated that the stereo multistate constraint Kalman filter (S-MSCKF) is comparable to state-of-the-art monocular solutions in terms of computational cost, while providing significantly greater robustness.
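For reference, an MSCKF-style filter maintains the current IMU state augmented with a sliding window of cloned camera poses; in the standard formulation (notation may differ slightly from the paper) the state vector is

  \mathbf{x} = \begin{bmatrix} \mathbf{x}_I^{T} & \mathbf{x}_{C_1}^{T} & \cdots & \mathbf{x}_{C_N}^{T} \end{bmatrix}^{T},
  \qquad
  \mathbf{x}_I = \begin{bmatrix} {}^{I}_{G}\mathbf{q}^{T} & \mathbf{b}_g^{T} & {}^{G}\mathbf{v}_I^{T} & \mathbf{b}_a^{T} & {}^{G}\mathbf{p}_I^{T} \end{bmatrix}^{T},

where {}^{I}_{G}\mathbf{q} is the rotation from the global frame to the IMU frame, \mathbf{b}_g and \mathbf{b}_a are the gyroscope and accelerometer biases, and {}^{G}\mathbf{v}_I, {}^{G}\mathbf{p}_I are the IMU velocity and position in the global frame. Feature observations constrain the camera clones without ever entering the state, which keeps the filter's cost linear in the number of features.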
Fusion of Stereo-Camera and PMD-Camera Data for Real-Time Suited Precise 3D Environment Reconstruction
  • K. Kuhnert, M. Stommel
  • Computer Science
  • 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems
  • 2006
A new technique that combines a stereo-camera system with a PMD camera is presented, showing that each system effectively compensates for the deficiencies of the other and is suited to real-time operation.
Search-based motion planning for quadrotors using linear quadratic minimum time control
A search-based planning method is proposed to compute dynamically feasible trajectories for a quadrotor flying in an obstacle-cluttered environment; it does not assume a hovering initial condition and is suitable for fast online re-planning while the robot is moving.
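The "linear quadratic minimum time" objective in the title trades control effort against flight duration; in its generic form (a sketch of the standard cost, not necessarily the paper's exact weights) each candidate trajectory is scored by

  J = \int_0^T \lVert \mathbf{u}(t) \rVert^2 \, dt + \rho\, T,

where \mathbf{u}(t) is the control input of an integrator chain model (e.g., acceleration or jerk), T is the trajectory duration, and \rho \ge 0 weights time against effort; graph search over short motion primitives minimizing J yields dynamically feasible trajectories without assuming a hover start.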
Embedded Real-time Stereo Estimation via Semi-Global Matching on the GPU
This work presents a real-time system producing reliable disparity estimation results on the new embedded energy-efficient GPU devices.
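At the core of semi-global matching is a per-path cost aggregation recurrence, which GPU implementations parallelize over pixels and disparities; the standard formulation (independent of any particular embedded device) is

  L_r(\mathbf{p}, d) = C(\mathbf{p}, d)
      + \min\!\big( L_r(\mathbf{p}-\mathbf{r}, d),\;
                    L_r(\mathbf{p}-\mathbf{r}, d-1) + P_1,\;
                    L_r(\mathbf{p}-\mathbf{r}, d+1) + P_1,\;
                    \min_k L_r(\mathbf{p}-\mathbf{r}, k) + P_2 \big)
      - \min_k L_r(\mathbf{p}-\mathbf{r}, k),

where C(\mathbf{p}, d) is the pixelwise matching cost, P_1 and P_2 penalize small and large disparity changes, and the aggregated cost S(\mathbf{p}, d) = \sum_r L_r(\mathbf{p}, d) over 4 or 8 path directions is minimized per pixel by winner-takes-all selection.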
Improving quadrotor trajectory tracking by compensating for aerodynamic effects
In this work, we demonstrate that the position tracking performance of a quadrotor may be significantly improved for forward and vertical flight by incorporating simple lumped parameter models for…
Fusion of stereo vision and Time-Of-Flight imaging for improved 3D estimation
It is shown that in this way, higher spatial resolution is obtained than by using the TOF camera alone, and higher-quality dense stereo disparity maps result from this data fusion.
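A simple baseline for this kind of depth fusion, shown here only as a generic illustration and not necessarily the scheme used in the paper, is per-pixel inverse-variance weighting of the two depth estimates:

  d_{\mathrm{fused}} = \frac{\sigma_{\mathrm{tof}}^{-2}\, d_{\mathrm{tof}} + \sigma_{\mathrm{stereo}}^{-2}\, d_{\mathrm{stereo}}}
                            {\sigma_{\mathrm{tof}}^{-2} + \sigma_{\mathrm{stereo}}^{-2}},

so the high-resolution stereo estimate dominates where its disparity is confident, while the TOF measurement stabilizes textureless regions where stereo variance is large.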