VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator

@article{Qin2017VINSMonoAR,
  title={VINS-Mono: A Robust and Versatile Monocular Visual-Inertial State Estimator},
  author={Tong Qin and Peiliang Li and Shaojie Shen},
  journal={IEEE Transactions on Robotics},
  year={2018},
  volume={34},
  number={4},
  pages={1004--1020}
}
One camera and one low-cost inertial measurement unit (IMU) form a monocular visual-inertial system (VINS), which is the minimum sensor suite (in size, weight, and power) for metric six degrees-of-freedom (DOF) state estimation. A tightly coupled, nonlinear optimization-based method is used to obtain highly accurate visual-inertial odometry by fusing preintegrated IMU measurements and feature observations.
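In such a tightly coupled formulation, the optimizer minimizes a sum of preintegrated IMU residuals and visual reprojection residuals over all keyframe states. The following is a minimal sketch of a reprojection residual on the normalized image plane; the function name and frame conventions are illustrative, not VINS-Mono's actual API:

```python
import numpy as np

def reprojection_residual(p_w, R_cw, t_cw, uv_obs):
    """Residual between a predicted and an observed normalized image point.

    p_w    : 3D landmark in the world frame
    R_cw   : rotation from world to camera frame (3x3)
    t_cw   : translation from world to camera frame
    uv_obs : observed feature on the normalized image plane
    """
    p_c = R_cw @ p_w + t_cw          # landmark expressed in the camera frame
    uv_pred = p_c[:2] / p_c[2]       # pinhole projection onto the normalized plane
    return uv_pred - uv_obs

# A landmark straight ahead of an identity-pose camera projects to (0, 0):
r = reprojection_residual(np.array([0.0, 0.0, 5.0]),
                          np.eye(3), np.zeros(3),
                          np.array([0.0, 0.0]))
```

In the full estimator this residual would be weighted by the feature covariance and stacked with the IMU terms inside a nonlinear least-squares solver.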

VINS-MKF: A Tightly-Coupled Multi-Keyframe Visual-Inertial Odometry for Accurate and Robust State Estimation

This paper presents a novel tightly-coupled multi-keyframe visual-inertial odometry (called VINS-MKF), which can provide an accurate and robust state estimation for robots in an indoor environment.

Relocalization, Global Optimization and Map Merging for Monocular Visual-Inertial SLAM

A monocular visual-inertial SLAM system that can relocalize the camera and obtain its absolute pose in a previously built map, and can reuse a map by saving and loading it efficiently; accuracy is validated on public datasets and compared against other state-of-the-art algorithms.

A Robust Visual Inertial Navigation System Based on Low-cost Inertial Measurement Unit

A visual-inertial system (VINS) based on a vision sensor is vulnerable to environmental illumination and texture, and the problem of initial scale ambiguity still exists in a monocular VINS system.

A Versatile Keyframe-Based Structureless Filter for Visual Inertial Odometry

Tests confirm that KSF reliably calibrates sensor parameters when the data contain adequate motion, and consistently estimates motion with accuracy rivaling recent VIO methods.

Rolling-Shutter Modelling for Direct Visual-Inertial Odometry

A rolling-shutter model is incorporated into the photometric bundle adjustment that estimates a set of recent keyframe poses and the inverse depth of a sparse set of points, within a direct visual-inertial odometry method that estimates the motion of the sensor setup and sparse 3D geometry of the environment from the measurements of a rolling-shutter camera and an inertial measurement unit.

Visual-inertial odometry based on tightly-coupled encoder

A Visual-Inertial-Encoder Tightly-Coupled Odometry (VIETO) algorithm is presented, and VIETO initialization as an optimal estimation problem in the sense of maximum-a-posteriori (MAP) estimation is described.

Robust and Efficient Visual-Inertial Odometry with Multi-plane Priors

A novel monocular visual-inertial odometry system that leverages multi-plane priors; a novel visual-inertial-plane PnP algorithm is introduced to use plane information for fast localization.

Multi-Camera Visual-Inertial Navigation with Online Intrinsic and Extrinsic Calibration

This paper presents a general multi-camera visual-inertial navigation system (mc-VINS) with online intrinsic and extrinsic calibration, which is able to utilize all the information from an arbitrary number of asynchronous cameras, and performs online sensor calibration of each camera's intrinsics as well as the spatial and temporal extrinsic parameters between all involved sensors, thus enabling high-fidelity localization.

DVIO: An Optimization-Based Tightly Coupled Direct Visual-Inertial Odometry

A novel optimization-based tightly coupled Direct Visual-Inertial Odometry (DVIO), which fuses the visual and inertial measurements to provide real-time full state estimation and is highly applicable to the navigation or the simultaneous localization and mapping of mobile devices or agile robots like micro air vehicles.

Modeling Varying Camera-IMU Time Offset in Optimization-Based Visual-Inertial Odometry

This work proposes a nonlinear optimization-based monocular visual inertial odometry (VIO) with varying camera-IMU time offset modeled as an unknown variable that is able to handle the rolling-shutter effects and imperfect sensor synchronization in a unified way.
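Treating the time offset as a variable typically means shifting each feature observation along its image-plane velocity by the current offset estimate. A minimal sketch of that compensation, with illustrative names (not the paper's actual notation):

```python
import numpy as np

def shift_observation(uv, uv_velocity, td):
    """Compensate a feature observation for an unknown camera-IMU time offset td.

    With td as an optimization variable, a feature observed at camera
    timestamp t is treated as if it were taken at t + td by moving it
    along its measured image-plane velocity.
    """
    return uv + td * uv_velocity

# A feature moving at 0.2 units/s on the normalized plane, offset by 50 ms:
uv_corrected = shift_observation(np.array([0.1, 0.0]),
                                 np.array([0.2, 0.0]), 0.05)
```

Because the shift is linear in td, the same term absorbs both a constant synchronization error and the per-row delay of a rolling shutter.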
...

References

(Showing 1-10 of 50 references)

Relocalization, Global Optimization and Map Merging for Monocular Visual-Inertial SLAM

A monocular visual-inertial SLAM system that can relocalize the camera and obtain its absolute pose in a previously built map, and can reuse a map by saving and loading it efficiently; accuracy is validated on public datasets and compared against other state-of-the-art algorithms.

Robust initialization of monocular visual-inertial estimation on aerial robots

  • Tong Qin, S. Shen
  • Computer Science
    2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
  • 2017
A robust on-the-fly estimator initialization algorithm to provide high-quality initial states for monocular visual-inertial systems (VINS); the open-source implementation is the initialization module integrated in VINS-Mono.

Robust visual inertial odometry using a direct EKF-based approach

A monocular visual-inertial odometry algorithm which achieves accurate tracking performance while exhibiting a very high level of robustness by directly using pixel intensity errors of image patches, leading to a truly power-up-and-go state estimation system.

Monocular Visual-Inertial State Estimation for Mobile Augmented Reality

This work proposes a tightly-coupled, optimization-based, monocular visual-inertial state estimation for robust camera localization in complex indoor and outdoor environments and develops a lightweight loop closure module that is tightly integrated with the state estimator to eliminate drift.

Monocular Visual–Inertial State Estimation With Online Initialization and Camera–IMU Extrinsic Calibration

  • Zhenfei Yang, S. Shen
  • Computer Science
    IEEE Transactions on Automation Science and Engineering
  • 2017
This paper proposes a methodology that is able to initialize velocity, gravity, visual scale, and camera-IMU extrinsic calibration on the fly, and shows through online experiments that this method leads to accurate calibration of the camera-IMU transformation, with errors less than 0.02 m in translation and 1° in rotation.

IMU Preintegration on Manifold for Efficient Visual-Inertial Maximum-a-Posteriori Estimation

This paper addresses the issue of increased computational complexity in monocular visual-inertial navigation by preintegrating inertial measurements between selected keyframes, and develops a preintegration theory that properly addresses the manifold structure of the rotation group and carefully deals with uncertainty propagation.
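The core idea is that the relative rotation, velocity, and position deltas between two keyframes can be accumulated from raw IMU samples alone, so they need not be re-integrated when the keyframe states change during optimization. A minimal first-order sketch of that accumulation, omitting the bias and noise propagation that is central to the paper:

```python
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w):
    """Rotation-vector exponential map onto SO(3) (Rodrigues formula)."""
    th = np.linalg.norm(w)
    if th < 1e-9:
        return np.eye(3) + skew(w)
    K = skew(w / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * K @ K

def preintegrate(gyro, accel, dt):
    """Accumulate rotation/velocity/position deltas between two keyframes
    from gyro (rad/s) and accelerometer (m/s^2) samples at period dt."""
    dR, dv, dp = np.eye(3), np.zeros(3), np.zeros(3)
    for w, a in zip(gyro, accel):
        dp = dp + dv * dt + 0.5 * (dR @ a) * dt**2  # position delta
        dv = dv + (dR @ a) * dt                     # velocity delta
        dR = dR @ so3_exp(w * dt)                   # rotation delta on SO(3)
    return dR, dv, dp
```

For example, 1 s of constant 1 m/s² acceleration with no rotation yields a velocity delta of 1 m/s and a position delta of 0.5 m, matching constant-acceleration kinematics.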

On-Manifold Preintegration for Real-Time Visual-Inertial Odometry

The preintegrated inertial measurement unit model can be seamlessly integrated into a visual-inertial pipeline under the unifying framework of factor graphs and the application of incremental-smoothing algorithms and the use of a structureless model for visual measurements, which avoids optimizing over the 3-D points, further accelerating the computation.

Initialization-Free Monocular Visual-Inertial State Estimation with Application to Autonomous MAVs

This paper presents a monocular visual-inertial system (VINS) for an autonomous quadrotor which relies only on an inexpensive off-the-shelf camera and IMU, and describes a robust state estimator which allows the robot to execute trajectories at 2 m/s with roll and pitch angles of 20 degrees.

Visual-Inertial Monocular SLAM With Map Reuse

This letter presents a novel tightly coupled visual-inertial simultaneous localization and mapping system that is able to close loops and reuse its map to achieve zero-drift localization in already mapped areas.

Real-time onboard visual-inertial state estimation and self-calibration of MAVs in unknown environments

This paper proposes a navigation algorithm for MAVs equipped with a single camera and an Inertial Measurement Unit (IMU) which is able to run onboard and in real-time, and proposes a speed-estimation module which converts the camera into a metric body-speed sensor using IMU data within an EKF framework.