Learning Precise 3D Manipulation from Multiple Uncalibrated Cameras

@inproceedings{Akinola2020LearningP3,
  title={Learning Precise 3D Manipulation from Multiple Uncalibrated Cameras},
  author={Iretiayo Akinola and J. Varley and D. Kalashnikov},
  booktitle={2020 IEEE International Conference on Robotics and Automation (ICRA)},
  year={2020},
  pages={4616--4622}
}
  • In this work, we present an effective multi-view approach to closed-loop end-to-end learning of precise manipulation tasks that are 3D in nature. Our method learns to accomplish these tasks using multiple statically placed but uncalibrated RGB camera views without building an explicit 3D representation such as a pointcloud or voxel grid. This multi-camera approach achieves superior task performance on difficult stacking and insertion tasks compared to single-view baselines. Single view robotic…
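The abstract's core idea, per-view image features fused directly into a policy without reconstructing a pointcloud or voxel grid, can be sketched minimally. This is a toy NumPy illustration, not the paper's architecture: the encoder, feature sizes, view count, and 7-DoF action dimension are all assumptions, and the linear "encoder" stands in for a CNN.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_view(image, weights):
    """Toy per-view encoder: flatten and project (stand-in for a CNN)."""
    return np.tanh(image.reshape(-1) @ weights)

# Hypothetical sizes: 3 uncalibrated static views, 8x8 grayscale crops,
# 16-d features per view, 7-DoF action output.
n_views, feat_dim = 3, 16
W_enc = rng.normal(scale=0.1, size=(8 * 8, feat_dim))
W_policy = rng.normal(scale=0.1, size=(n_views * feat_dim, 7))

views = [rng.random((8, 8)) for _ in range(n_views)]
# Fuse by concatenation: no camera calibration or explicit 3D representation.
fused = np.concatenate([encode_view(v, W_enc) for v in views])
action = fused @ W_policy  # closed-loop policy output, e.g. end-effector deltas
print(action.shape)  # (7,)
```

Because the views are fused in feature space rather than geometrically, the network never needs camera extrinsics; in training it learns to exploit whatever viewpoints it is given.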