VIPose: Real-time Visual-Inertial 6D Object Pose Tracking

@inproceedings{Ge2021VIPoseRV,
  title={VIPose: Real-time Visual-Inertial 6D Object Pose Tracking},
  author={Rundong Ge and Giuseppe Loianno},
  booktitle={2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)},
  year={2021},
  pages={4597--4603}
}
  • Rundong Ge, Giuseppe Loianno
  • Published 27 July 2021
Estimating the 6D pose of objects is beneficial for robotics tasks such as transportation, autonomous navigation, and manipulation, as well as in scenarios beyond robotics like virtual and augmented reality. With respect to single-image pose estimation, pose tracking takes into account the temporal information across multiple frames to overcome possible detection inconsistencies and to improve the pose estimation efficiency. In this work, we introduce a novel Deep Neural Network (DNN) called VIPose…
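The abstract describes fusing visual and inertial measurements for pose tracking. A minimal sketch of the inertial prediction step such a tracker could run between camera frames is shown below; the function names, the constant-gravity model, and the simple Euler integration are illustrative assumptions, not VIPose's actual architecture:

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix of a 3-vector (so(3) hat operator)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    """Rodrigues' formula: rotation matrix from an axis-angle vector."""
    theta = np.linalg.norm(w)
    if theta < 1e-9:
        return np.eye(3)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def propagate_pose(R, t, v, gyro, accel, dt, g=np.array([0.0, 0.0, -9.81])):
    """Predict pose (R, t) and velocity v over dt from one IMU sample.

    gyro is body-frame angular velocity [rad/s]; accel is body-frame
    specific force [m/s^2]. A hypothetical stand-in for the inertial
    branch of a visual-inertial tracker.
    """
    R_new = R @ exp_so3(gyro * dt)          # integrate rotation on SO(3)
    a_world = R @ accel + g                 # remove gravity in world frame
    t_new = t + v * dt + 0.5 * a_world * dt**2
    v_new = v + a_world * dt
    return R_new, t_new, v_new
```

The visual branch would then correct this prediction whenever a new camera frame arrives, which is the complementary role the two sensors play in this class of trackers.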

Citations

HFF6D: Hierarchical Feature Fusion Network for Robust 6D Object Pose Tracking

Quantitative and qualitative results demonstrate that HFF6D outperforms state-of-the-art (SOTA) methods in both accuracy and efficiency, and is also shown to achieve robust tracking under challenging scenes.

Real-Time Physics-Based Object Pose Tracking during Non-Prehensile Manipulation

This work proposes a method to track the 6D pose of an object over time, while the object is under non-prehensile manipulation by a robot, using a particle filtering approach to combine the control information with the visual information.

Vision-based Relative Detection and Tracking for Teams of Micro Aerial Vehicles

The proposed perception and inference pipeline which includes a Deep Neural Network (DNN) as visual target detector is lightweight and capable of concurrently running control and planning with Size, Weight, and Power constrained robots on-board.

A Flexible-Frame-Rate Vision-Aided Inertial Object Tracking System for Mobile Devices

Both simulations and real world experiments show that the method achieves accurate and robust object tracking on low-end devices.

References

Showing 1-10 of 27 references

se(3)-TrackNet: Data-driven 6D Pose Tracking by Calibrating Image Residuals in Synthetic Domains

This work proposes a data-driven optimization approach for long-term, 6D pose tracking, which aims to identify the optimal relative pose given the current RGB-D observation and a synthetic image conditioned on the previous best estimate and the object’s model.

PoseCNN: A Convolutional Neural Network for 6D Object Pose Estimation in Cluttered Scenes

This work introduces PoseCNN, a new Convolutional Neural Network for 6D object pose estimation, which is highly robust to occlusions, can handle symmetric objects, and provides accurate pose estimation using only color images as input.

Real-Time Seamless Single Shot 6D Object Pose Prediction

A single-shot approach for simultaneously detecting an object in an RGB image and predicting its 6D pose without requiring multiple stages or having to examine multiple hypotheses is proposed, which substantially outperforms other recent CNN-based approaches when they are all used without postprocessing.

CDPN: Coordinates-Based Disentangled Pose Network for Real-Time RGB-Based 6-DoF Object Pose Estimation

This work proposes a novel 6-DoF pose estimation approach: Coordinates-based Disentangled Pose Network (CDPN), which disentangles the pose to predict rotation and translation separately to achieve highly accurate and robust pose estimation.

PoseRBPF: A Rao–Blackwellized Particle Filter for 6-D Object Pose Tracking

This work formulates the 6D object pose tracking problem in the Rao-Blackwellized particle filtering framework, where the 3D rotation and the 3D translation of an object are decoupled, and achieves state-of-the-art results on two 6D pose estimation benchmarks.
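PoseRBPF's core idea is tracking pose with a particle filter. A toy sketch of one predict-reweight-resample step over translation particles, assuming a Gaussian motion model and a caller-supplied `likelihood` (both hypothetical stand-ins; the actual method Rao-Blackwellizes the rotation with a learned observation model):

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def rbpf_step(particles, weights, motion_noise, likelihood):
    """One particle-filter step over 3D translation hypotheses.

    particles: (N, 3) array of translation samples.
    weights:   (N,) normalized importance weights.
    likelihood(p) -> float scores one hypothesis against the observation
    (a hypothetical stand-in for PoseRBPF's learned observation model).
    """
    # Predict: diffuse particles with Gaussian motion noise.
    particles = particles + rng.normal(0.0, motion_noise, particles.shape)
    # Update: reweight by the observation likelihood, then normalize.
    weights = weights * np.array([likelihood(p) for p in particles])
    weights = weights / weights.sum()
    # Resample (multinomial for brevity; systematic is more common).
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```

Decoupling rotation from translation, as the paper does, keeps the particle set small: rotation is handled analytically per particle rather than sampled jointly.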

Deep Model-Based 6D Pose Refinement in RGB

A new visual loss is proposed that drives the pose update by aligning object contours, thus avoiding the definition of any explicit appearance model and producing pose accuracies that come close to 3D ICP without the need for depth data.

DeepIM: Deep Iterative Matching for 6D Pose Estimation

A novel deep neural network for 6D pose matching named DeepIM is proposed, trained to predict a relative pose transformation using a disentangled representation of 3D location and 3D orientation and an iterative training process.
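DeepIM's iterative matching composes learned relative-pose updates onto the current estimate. A minimal sketch using 4x4 homogeneous transforms, where `predict_delta` stands in for the learned network and `render`/`observe` for its image inputs (all hypothetical names):

```python
import numpy as np

def iterative_refine(pose, render, observe, predict_delta, iters=4):
    """Iteratively refine a 6D pose estimate, DeepIM-style.

    pose: 4x4 homogeneous transform (current estimate).
    render(pose): synthetic view of the object at `pose`.
    predict_delta(rendered, observed): relative SE(3) correction,
    a stand-in for the learned matching network.
    """
    for _ in range(iters):
        delta = predict_delta(render(pose), observe)
        pose = delta @ pose  # compose the correction onto the estimate
    return pose
```

With a well-trained `predict_delta`, each iteration shrinks the residual between the rendered and observed views, which is why a handful of iterations usually suffices.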

Multi-view self-supervised deep learning for 6D pose estimation in the Amazon Picking Challenge

This paper proposes a self-supervised method to generate a large labeled dataset without tedious manual segmentation and demonstrates that the system can reliably estimate the 6D pose of objects under a variety of scenarios.

VINet: Visual-Inertial Odometry as a Sequence-to-Sequence Learning Problem

This paper presents an on-manifold sequence-to-sequence learning approach to motion estimation using visual and inertial sensors that eliminates the need for tedious manual synchronization of the camera and IMU and can be trained to outperform state-of-the-art methods in the presence of calibration and synchronization errors.

The MOPED framework: Object recognition and pose estimation for manipulation

We present MOPED, a framework for Multiple Object Pose Estimation and Detection that seamlessly integrates single-image and multi-image object recognition and pose estimation in one optimized, robust, and scalable framework.