Tightly-Coupled Monocular Visual-Odometric SLAM Using Wheels and a MEMS Gyroscope

@article{Quan2019TightlyCoupledMV,
  title={Tightly-Coupled Monocular Visual-Odometric SLAM Using Wheels and a MEMS Gyroscope},
  author={Meixiang Quan and Songhao Piao and Minglang Tan and Shi-Sheng Huang},
  journal={IEEE Access},
  year={2019},
  volume={7},
  pages={97374--97389}
}
In this paper, we present a novel tightly coupled probabilistic monocular visual-odometric simultaneous localization and mapping (VOSLAM) algorithm using wheels and a MEMS gyroscope, which can provide accurate, robust, and long-term localization for ground robots. Key Method: First, we present a novel odometer preintegration theory on manifold; it integrates the wheel encoder measurements and gyroscope measurements into a relative motion constraint that is independent of the linearization point and carefully…
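A generic sketch of the preintegration idea described above (notation is assumed here, not taken from the paper): gyroscope rates are integrated on SO(3) to obtain the relative rotation between keyframes i and j, and wheel-odometer velocities are accumulated in the rotated body frame to obtain the relative translation:

```latex
\Delta \mathbf{R}_{ij} = \prod_{k=i}^{j-1} \operatorname{Exp}\!\big((\boldsymbol{\omega}_k - \mathbf{b}_g)\,\Delta t\big),
\qquad
\Delta \mathbf{p}_{ij} = \sum_{k=i}^{j-1} \Delta \mathbf{R}_{ik}\,\mathbf{v}_k\,\Delta t
```

Here $\boldsymbol{\omega}_k$ is the gyroscope reading, $\mathbf{b}_g$ the gyroscope bias, and $\mathbf{v}_k$ the wheel-odometer velocity at step $k$. Because both quantities are expressed relative to frame $i$, the resulting motion constraint does not depend on the global pose at which the estimator is linearized, which is the property the abstract refers to.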
Consistent Monocular Ackermann Visual–Inertial Odometry for Intelligent and Connected Vehicle Localization
TLDR
MAVIO not only improves the observability of the VIO scale direction under the degenerate motions of ground vehicles, but also resolves the inconsistency of the vehicle's relative kinematic error measurement model, further improving positioning accuracy.
Visual-Inertial Odometry Tightly Coupled with Wheel Encoder Adopting Robust Initialization and Online Extrinsic Calibration
  • Jinxu Liu, Wei Gao, Zhanyi Hu
  • Computer Science
    2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
  • 2019
TLDR
A novel extended visual-inertial odometry algorithm tightly fusing data from the above three sensors is proposed, utilizing complete IMU measurements and wheel encoder readings to make scale estimation more accurate in subsequent 4-degree-of-freedom (DoF) optimization.
Learning Wheel Odometry and IMU Errors for Localization
TLDR
This paper leverages recent advances in deep learning and variational inference to correct dynamical and observation models for state-space systems and builds an Extended Kalman Filter on the learned model using wheel speed sensors and the fiber optic gyro for state propagation.
An Enhanced Hybrid Visual–Inertial Odometry System for Indoor Mobile Robot
TLDR
A framework that introduces the IMU pre-integration result into the MSCKF framework as observation information to improve positioning accuracy is proposed, and it is shown that the proposed algorithm achieves better accuracy than existing mainstream algorithms while maintaining real-time performance.
Localization for Ground Robots: On Manifold Representation, Integration, Re-Parameterization, and Optimization
TLDR
This paper proposes a novel probabilistic framework that is able to use the wheel odometry measurements for high-precision 6D pose estimation, in which only the wheel odometry and a monocular camera are mandatory.
Visual-Inertial Localization for Skid-Steering Robots with Kinematic Constraints
TLDR
This work derives in a principled way the robot's kinematic constraints based on the instantaneous centers of rotation (ICR) model and integrates them in a tightly coupled manner into the sliding-window bundle adjustment (BA)-based visual-inertial estimator.
Monocular visual-inertial SLAM algorithm combined with wheel speed anomaly detection
TLDR
This paper uses three methods to detect abnormal chassis movement and to analyze chassis motion status in real time, and adopts a torque-based control method for the Mecanum mobile chassis.
Vision-Aided Localization For Ground Robots
TLDR
This paper proposes a novel vision-based localization algorithm specifically designed for ground robots, fusing measurements from a camera, an IMU, and the wheel odometer in a complete sliding-window-based estimator.
Visual-Odometric Localization and Mapping for Ground Vehicles Using SE(2)-XYZ Constraints
  • Fan Zheng, Yunhui Liu
  • Engineering
    2019 International Conference on Robotics and Automation (ICRA)
  • 2019
TLDR
A simpler algorithm is proposed that directly parameterizes the ground vehicle poses on SE(2), and a complete visual-odometric localization and mapping system is developed, in a commonly used graph optimization structure.
Plane-Aided Visual-Inertial Odometry for 6-DOF Pose Estimation of a Robotic Navigation Aid
TLDR
A new VIO method is introduced for pose estimation of a robotic navigation aid that uses a 3D time-of-flight camera for assistive navigation and improves the accuracy in estimating the IMU bias and reduces the camera's pose error.

References

SHOWING 1-10 OF 35 REFERENCES
Visual-Inertial Monocular SLAM With Map Reuse
TLDR
This letter presents a novel tightly coupled visual-inertial simultaneous localization and mapping system that is able to close loops and reuse its map to achieve zero-drift localization in already mapped areas.
VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator
TLDR
This paper presents VINS-Mono: a robust and versatile monocular visual-inertial state estimator applicable to different applications that require high-accuracy localization, and demonstrates an onboard closed-loop autonomous flight on a micro-aerial-vehicle platform.
Keyframe-based visual–inertial odometry using nonlinear optimization
TLDR
This work formulates a rigorously probabilistic cost function that combines reprojection errors of landmarks and inertial terms, and compares the performance to an implementation of a state-of-the-art stochastic-cloning sliding-window filter.
High-precision, consistent EKF-based visual-inertial odometry
TLDR
A novel, real-time EKF-based VIO algorithm is proposed, which achieves consistent estimation by ensuring the correct observability properties of its linearized system model, and performing online estimation of the camera-to-inertial measurement unit (IMU) calibration parameters.
IMU Preintegration on Manifold for Efficient Visual-Inertial Maximum-a-Posteriori Estimation
TLDR
This paper addresses the issue of increased computational complexity in monocular visual-inertial navigation by preintegrating inertial measurements between selected keyframes by developing a preintegration theory that properly addresses the manifold structure of the rotation group and carefully deals with uncertainty propagation.
Visual-inertial navigation, mapping and localization: A scalable real-time causal approach
TLDR
An integrated approach to "loop-closure", that is, the recognition of previously seen locations and the topological re-adjustment of the traveled path, is described, in which loop-closure can be performed without the need to re-compute past trajectories or perform bundle adjustment.
Monocular Visual-Inertial State Estimation for Mobile Augmented Reality
TLDR
This work proposes a tightly-coupled, optimization-based, monocular visual-inertial state estimation for robust camera localization in complex indoor and outdoor environments and develops a lightweight loop closure module that is tightly integrated with the state estimator to eliminate drift.
SVO: Fast semi-direct monocular visual odometry
TLDR
A semi-direct monocular visual odometry algorithm that is precise, robust, and faster than current state-of-the-art methods and applied to micro-aerial-vehicle state-estimation in GPS-denied environments is proposed.
MonoSLAM: Real-Time Single Camera SLAM
TLDR
The first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera, achieving real-time but drift-free performance inaccessible to structure-from-motion approaches, is presented.
Bi-Objective Bundle Adjustment With Application to Multi-Sensor SLAM
TLDR
Results show that the inertial integration with an automatic weighting method decreases the drift on the final localization.