Object SLAM-Based Active Mapping and Robotic Grasping

@article{Wu2021ObjectSA,
  title={Object SLAM-Based Active Mapping and Robotic Grasping},
  author={Yanmin Wu and Yunzhou Zhang and Delong Zhu and Xin Chen and S. Coleman and Wenkai Sun and Xinggang Hu and Zhiqiang Deng},
  journal={2021 International Conference on 3D Vision (3DV)},
  year={2021},
  pages={1372-1381}
}
This paper presents the first active object mapping framework for complex robotic manipulation and autonomous perception tasks. The framework is built on an object SLAM system integrated with a simultaneous multi-object pose estimation process that is optimized for robotic grasping. Aiming to reduce the observation uncertainty on target objects and increase their pose estimation accuracy, we also design an object-driven exploration strategy to guide the object mapping process, enabling… 
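
The object-driven exploration strategy is only summarized above. As a rough, hypothetical illustration of the idea, the sketch below ranks candidate camera views by how well they cover objects whose estimates are still uncertain; the scoring weights and the view_score/next_best_view helpers are assumptions, not the paper's actual criterion.

```python
# Hypothetical next-best-view scorer: candidate camera poses are ranked by how well
# they observe uncertain objects. This is a stand-in for the paper's
# observation-uncertainty criterion, not its actual formulation.
import numpy as np

def view_score(cam_pos, cam_dir, objects, max_range=3.0):
    """Score one candidate view: uncertain objects seen head-on and up close count most."""
    score = 0.0
    for centroid, uncertainty in objects:                    # (3,) position, scalar uncertainty
        offset = centroid - cam_pos
        dist = np.linalg.norm(offset)
        if dist < 1e-6 or dist > max_range:
            continue                                         # degenerate or out of range
        alignment = float(np.dot(offset / dist, cam_dir))    # cosine to the optical axis
        if alignment <= 0.0:
            continue                                         # object behind the camera
        score += uncertainty * alignment / dist
    return score

def next_best_view(candidates, objects):
    """Pick the (position, unit direction) candidate with the highest score."""
    return max(candidates, key=lambda c: view_score(c[0], c[1], objects))

# Example: two candidate views around a single uncertain object.
objects = [(np.array([0.0, 0.0, 0.5]), 0.2)]
candidates = [
    (np.array([1.0, 0.0, 0.5]), np.array([-1.0, 0.0, 0.0])),
    (np.array([0.0, 2.5, 0.5]), np.array([0.0, -1.0, 0.0])),
]
best_pos, best_dir = next_best_view(candidates, objects)
```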

Citations

Next-Best-View Prediction for Active Stereo Cameras and Highly Reflective Objects
TLDR
This work proposes a next-best-view framework to strategically select camera viewpoints for completing depth data on reflective objects, based on the Phong reflection model and a photometric response function, and implements an active perception pipeline which is evaluated on a challenging real-world dataset.
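
The TLDR names the Phong reflection model as the basis for predicting how a reflective surface responds to a candidate viewpoint. The snippet below is a minimal sketch of the standard Phong specular term plus a toy saturation check; the coefficients, the threshold, and the way the term would feed a planner are assumptions, not the paper's pipeline.

```python
# Standard Phong specular term, used here only to sketch how a planner might predict
# whether a shiny patch produces a saturating highlight from a candidate view.
# The coefficients and the 0.5 threshold are illustrative assumptions.
import numpy as np

def phong_specular(normal, light_dir, view_dir, k_s=0.8, shininess=32):
    """Return k_s * max(R . V, 0)^n, with R the mirror reflection of the light direction."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)       # surface -> light
    v = view_dir / np.linalg.norm(view_dir)         # surface -> camera
    r = 2.0 * np.dot(n, l) * n - l                  # reflect the light about the normal
    return k_s * max(float(np.dot(r, v)), 0.0) ** shininess

# A candidate viewpoint could be flagged when the predicted highlight is too strong.
risky = phong_specular(np.array([0.0, 0.0, 1.0]),
                       np.array([0.3, 0.0, 1.0]),
                       np.array([0.3, 0.0, 1.0])) > 0.5
```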

References

SHOWING 1-10 OF 41 REFERENCES
CubeSLAM: Monocular 3-D Object SLAM
TLDR
The SLAM method achieves state-of-the-art monocular camera pose estimation and, at the same time, improves 3-D object detection accuracy.
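
CubeSLAM's cuboid landmarks are constrained by 2-D detections. As a simplified sketch of that coupling (not the paper's exact error terms), the code below projects a cuboid's corners into the image and compares the resulting tight box against a detected bounding box.

```python
# Simplified cuboid-landmark measurement: project the 8 corners of an object box and
# compare their tight 2-D bounding box against a detection. A stand-in for CubeSLAM's
# actual error terms.
import numpy as np

def cuboid_corners(center, dims, yaw):
    """8 corners (8x3) of a yawed cuboid in the world frame."""
    dx, dy, dz = np.asarray(dims, dtype=float) / 2.0
    local = np.array([[sx * dx, sy * dy, sz * dz]
                      for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
    c, s = np.cos(yaw), np.sin(yaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return local @ R.T + np.asarray(center, dtype=float)

def project_bbox(corners, K, T_cw):
    """Project corners with intrinsics K (3x3) and world-to-camera pose T_cw (4x4),
    then return the tight 2-D box [umin, vmin, umax, vmax]."""
    pts_h = np.hstack([corners, np.ones((len(corners), 1))])
    cam = (T_cw @ pts_h.T)[:3]                 # 3x8 points in the camera frame
    uv = (K @ cam)[:2] / cam[2]                # perspective division
    return np.array([uv[0].min(), uv[1].min(), uv[0].max(), uv[1].max()])

def bbox_residual(predicted, detected):
    """Residual between predicted and detected boxes (both [umin, vmin, umax, vmax])."""
    return np.asarray(predicted) - np.asarray(detected)
```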
QuadricSLAM: Dual Quadrics From Object Detections as Landmarks in Object-Oriented SLAM
TLDR
A sensor model for object detectors is developed that addresses the challenge of partially visible objects, and it is demonstrated how to jointly estimate the camera pose and constrained dual quadric parameters in factor-graph-based SLAM with a general perspective camera.
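
The central relation behind dual-quadric landmarks is that a dual quadric Q* projects to a dual conic C* = P Q* Pᵀ under the camera matrix P = K [R | t], and the conic's bounding box can be compared with an object detection. The sketch below builds an axis-aligned dual ellipsoid and extracts that box; the normalization and box-extraction details follow a common convention and are not necessarily the paper's exact implementation.

```python
# Dual-quadric landmark projection: C* = P Q* P^T, then the ellipse's axis-aligned
# bounding box is read off the normalized dual conic. Box extraction follows a common
# convention rather than the paper's exact code.
import numpy as np

def dual_ellipsoid(center, radii):
    """4x4 dual quadric of an axis-aligned ellipsoid at `center` with semi-axes `radii`."""
    Q_star = np.diag([radii[0] ** 2, radii[1] ** 2, radii[2] ** 2, -1.0])
    T = np.eye(4)
    T[:3, 3] = center
    return T @ Q_star @ T.T

def project_dual_quadric(Q_star, K, R, t):
    """Project to a dual conic C* = P Q* P^T with P = K [R | t], normalized so C*[2,2] = 1."""
    P = K @ np.hstack([R, np.asarray(t, dtype=float).reshape(3, 1)])
    C_star = P @ Q_star @ P.T
    return C_star / C_star[2, 2]

def conic_bbox(C_star):
    """Axis-aligned box [umin, vmin, umax, vmax] of the ellipse encoded by the dual conic."""
    cx, cy = C_star[0, 2], C_star[1, 2]
    du = np.sqrt(max(cx ** 2 - C_star[0, 0], 0.0))
    dv = np.sqrt(max(cy ** 2 - C_star[1, 1], 0.0))
    return np.array([cx - du, cy - dv, cx + du, cy + dv])
```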
EAO-SLAM: Monocular Semi-Dense Object SLAM Based on Ensemble Data Association
TLDR
This work proposes an ensemble data association strategy that integrates parametric and nonparametric statistical tests, and presents an accurate object pose estimation framework in which an outlier-robust centroid and scale estimation algorithm and an object pose initialization algorithm are developed to improve the optimality of the pose estimation results.
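
As an illustration of mixing parametric and nonparametric tests for object-level data association, the sketch below runs Welch's t-test and a Kolmogorov–Smirnov test per axis on two 3-D point sets and associates them only if neither test rejects. The specific tests, the per-axis treatment, and the significance level are assumptions, not EAO-SLAM's exact procedure.

```python
# Ensemble-style association check: two point clouds are treated as the same object
# only if both a parametric and a nonparametric two-sample test fail to reject, on
# every axis. Test choices and threshold are illustrative assumptions.
import numpy as np
from scipy import stats

def same_object(existing_pts, new_pts, alpha=0.05):
    """existing_pts, new_pts: (N, 3) and (M, 3) arrays of object surface points."""
    for axis in range(3):
        a, b = existing_pts[:, axis], new_pts[:, axis]
        _, p_t = stats.ttest_ind(a, b, equal_var=False)   # parametric: Welch's t-test
        _, p_ks = stats.ks_2samp(a, b)                     # nonparametric: Kolmogorov-Smirnov
        if p_t < alpha or p_ks < alpha:
            return False            # at least one test says the samples differ
    return True

# Example usage: compare two noisy observations of the same object.
rng = np.random.default_rng(0)
obs_a = rng.normal([0.5, 0.2, 0.3], 0.01, size=(200, 3))
obs_b = rng.normal([0.5, 0.2, 0.3], 0.01, size=(180, 3))
associated = same_object(obs_a, obs_b)
```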
Goal-Driven Autonomous Mapping Through Deep Reinforcement Learning and Planning-Based Navigation
TLDR
A navigation policy is learned through a deep reinforcement learning (DRL) framework in a simulated environment and integrated into a motion planning stack as the local navigation layer to move the robot towards the intermediate goals.
NeuralRecon: Real-Time Coherent 3D Reconstruction from Monocular Video
TLDR
To the best of our knowledge, this is the first learning-based system that is able to reconstruct dense, coherent 3D geometry in real time, and it outperforms state-of-the-art methods in terms of both accuracy and speed.
Active SLAM for Mobile Robots With Area Coverage and Obstacle Avoidance
In this article, we present an active simultaneous localization and mapping (SLAM) framework for a mobile robot to obtain a collision-free trajectory with good performance in SLAM uncertainty…
Antipodal Robotic Grasping using Generative Residual Convolutional Neural Network
TLDR
A novel Generative Residual Convolutional Neural Network (GR-ConvNet) model that can generate robust antipodal grasps from n-channel input at real-time speeds (∼20ms) is proposed.
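
Networks of this generative kind are usually decoded by taking the peak of a pixel-wise grasp-quality map and reading the angle and width at that pixel. The sketch below shows that post-processing step; the map names and the cos 2θ / sin 2θ angle encoding are assumptions about the interface rather than GR-ConvNet's published code.

```python
# Decode pixel-wise grasp maps (quality, cos 2θ, sin 2θ, width) into one grasp:
# pick the highest-quality pixel and read the angle and width there. The map layout
# is an assumed interface, not the network's exact output format.
import numpy as np

def decode_grasp(quality, cos2t, sin2t, width):
    """All inputs are HxW arrays; returns (row, col, theta, gripper_width)."""
    row, col = np.unravel_index(np.argmax(quality), quality.shape)
    theta = 0.5 * np.arctan2(sin2t[row, col], cos2t[row, col])   # undo the 2θ encoding
    return row, col, float(theta), float(width[row, col])
```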
CosyPose: Consistent multi-view multi-object 6D pose estimation
TLDR
The proposed method, dubbed CosyPose, outperforms current state-of-the-art results for single-view and multi-view 6D object pose estimation by a large margin on two challenging benchmarks: the YCB-Video and T-LESS datasets.
Coverage trajectory planning for a bush trimming robot arm
TLDR
A novel motion planning algorithm for robotic bush trimming is presented, based on an optimal route search over a graph, which provides both accuracy in the surface-sweeping task and smoothness in the motion of the robot arm.
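
The route-search formulation can be pictured as ordering surface patches so that each is swept exactly once. The toy sketch below uses a greedy nearest-neighbour tour purely to illustrate that problem structure; the paper instead solves an optimal route search over a graph that also accounts for motion smoothness.

```python
# Toy coverage route: visit every surface patch once, greedily picking the nearest
# unvisited patch. Only an illustration of the problem structure, not the paper's
# optimal graph search.
import numpy as np

def greedy_coverage_route(patch_centers):
    """patch_centers: (N, 3) array; returns an index order visiting each patch once."""
    remaining = list(range(len(patch_centers)))
    route = [remaining.pop(0)]                      # start at the first patch
    while remaining:
        last = patch_centers[route[-1]]
        nxt = min(remaining, key=lambda i: np.linalg.norm(patch_centers[i] - last))
        remaining.remove(nxt)
        route.append(nxt)
    return route
```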
GraspNet-1Billion: A Large-Scale Benchmark for General Object Grasping
TLDR
This work contributes a large-scale grasp pose detection dataset with a unified evaluation system and proposes an end-to-end grasp pose prediction network given point cloud inputs, where the network learns approaching direction and operation parameters in a decoupled manner.
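
Learning the grasp "in a decoupled manner" means the full 6-DoF gripper pose is recomposed from an approach direction, an in-plane rotation about it, and a translation along it. The sketch below shows one generic way to assemble such a pose; the composition and the argument names are assumptions, not GraspNet's API.

```python
# Compose a 6-DoF grasp pose from decoupled predictions: a contact point, an approach
# direction, an in-plane rotation about it, and a depth. A generic construction used
# for illustration, not GraspNet's code.
import numpy as np

def grasp_pose(point, approach, in_plane_angle, depth):
    """Return a 4x4 gripper pose whose z-axis is the approach direction."""
    z = np.asarray(approach, dtype=float)
    z /= np.linalg.norm(z)
    # Build an arbitrary frame around z, then rotate it about z by the predicted angle.
    ref = np.array([1.0, 0.0, 0.0]) if abs(z[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    x = np.cross(ref, z)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    c, s = np.cos(in_plane_angle), np.sin(in_plane_angle)
    R = np.column_stack([c * x + s * y, -s * x + c * y, z])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(point, dtype=float) + depth * z   # slide along the approach axis
    return T
```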