PoseConvGRU: A Monocular Approach for Visual Ego-motion Estimation by Learning

@article{Zhai2020PoseConvGRUAM,
  title={PoseConvGRU: A Monocular Approach for Visual Ego-motion Estimation by Learning},
  author={G. Zhai and L. Liu and Linjian Zhang and Y. Liu},
  journal={ArXiv},
  year={2020},
  volume={abs/1906.08095}
}
  • G. Zhai, L. Liu, Linjian Zhang, Y. Liu
  • Published 2020
  • Computer Science, Mathematics, Engineering
  • ArXiv
  • While many visual ego-motion algorithm variants have been proposed in the past decade, learning-based ego-motion estimation methods have attracted increasing attention because of their desirable properties of robustness to image noise and independence from camera calibration. In this work, we propose a data-driven approach of fully trainable visual ego-motion estimation for a monocular camera. We use an end-to-end learning approach, allowing the model to map directly from input image pairs to an…
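The abstract is truncated here, but the title and the description of mapping image pairs end-to-end to ego-motion suggest a CNN feature encoder feeding a convolutional GRU with a pose-regression head. The sketch below illustrates that general pattern in PyTorch as a minimal assumption-laden example; all module names, layer sizes, and the 6-DoF output parameterization are illustrative and not taken from the authors' implementation.

```python
# Minimal sketch of a learning-based monocular ego-motion estimator in the spirit of
# PoseConvGRU. Layer sizes, module names, and the 6-DoF output are illustrative
# assumptions, not the paper's architecture.
import torch
import torch.nn as nn


class ConvGRUCell(nn.Module):
    """Convolutional GRU cell operating on feature maps instead of vectors."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        p = k // 2
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=p)  # update / reset gates
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=p)       # candidate hidden state

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde


class EgoMotionNet(nn.Module):
    """Maps consecutive frame pairs to 6-DoF relative poses (3 translation + 3 rotation)."""
    def __init__(self, hid_ch=64):
        super().__init__()
        # Feature encoder applied to a stacked image pair (2 RGB frames = 6 channels).
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 7, stride=2, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv2d(64, hid_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.gru = ConvGRUCell(hid_ch, hid_ch)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(hid_ch, 6))

    def forward(self, pairs):
        # pairs: (batch, time, 6, H, W) -- each time step is a concatenated frame pair.
        b, t = pairs.shape[:2]
        h, poses = None, []
        for i in range(t):
            feat = self.encoder(pairs[:, i])
            h = torch.zeros_like(feat) if h is None else h
            h = self.gru(feat, h)                 # carry temporal context across pairs
            poses.append(self.head(h))            # regress relative pose for this step
        return torch.stack(poses, dim=1)          # (batch, time, 6)


# Usage: x = torch.randn(2, 5, 6, 128, 416); EgoMotionNet()(x).shape -> (2, 5, 6)
```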
