Corpus ID: 203737335

Two Stream Networks for Self-Supervised Ego-Motion Estimation

@inproceedings{Ambrus2019TwoSN,
  title={Two Stream Networks for Self-Supervised Ego-Motion Estimation},
  author={Rares Ambrus and Vitor Guizilini and Jie Li and Sudeep Pillai and Adrien Gaidon},
  booktitle={Conference on Robot Learning (CoRL)},
  year={2019}
}
Learning depth and camera ego-motion from raw unlabeled RGB video streams is seeing exciting progress through self-supervision from strong geometric cues. To leverage not only appearance but also scene geometry, we propose a novel self-supervised two-stream network using RGB and inferred depth information for accurate visual odometry. In addition, we introduce a sparsity-inducing data augmentation policy for ego-motion learning that effectively regularizes the pose network to enable stronger…
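The abstract describes two ideas: a pose network with separate RGB and depth streams whose features are fused before regressing ego-motion, and a sparsity-inducing augmentation applied to the depth input. The sketch below illustrates both in the simplest possible form with NumPy. It is not the authors' architecture: the linear "encoders", feature sizes, random weights, and the 50% dropout-style depth masking are all assumptions chosen only to make the data flow concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def stream_features(x, w):
    """Toy per-stream encoder: flatten the input and apply a linear map + ReLU."""
    return np.maximum(x.reshape(-1) @ w, 0.0)

# Hypothetical shapes: a pair of tiny 8x8 frames per stream.
H = W = 8
rgb_pair = rng.standard_normal((2, H, W, 3))    # two consecutive RGB frames
depth_pair = rng.standard_normal((2, H, W, 1))  # inferred depth for the same frames

# Random weights stand in for learned parameters (illustration only).
w_rgb = rng.standard_normal((rgb_pair.size, 16))
w_depth = rng.standard_normal((depth_pair.size, 16))
w_pose = rng.standard_normal((32, 6))           # 6-DoF output: translation + rotation

# Sparsity-inducing augmentation (assumed form): randomly zero out a
# fraction of the depth input before it reaches the depth stream.
mask = rng.random(depth_pair.shape) > 0.5
depth_sparse = depth_pair * mask

# Two-stream fusion: concatenate per-stream features, regress a pose vector.
feats = np.concatenate([
    stream_features(rgb_pair, w_rgb),
    stream_features(depth_sparse, w_depth),
])
pose = feats @ w_pose
print(pose.shape)  # (6,)
```

The key design point the paper's title alludes to is the fusion step: appearance (RGB) and geometry (inferred depth) are encoded independently and only combined for the final pose regression.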
Citations

3D Packing for Self-Supervised Monocular Depth Estimation

Self-Supervised Learning of Visual Odometry
  • Lesheng Song, Wan Luo
  • Computer Science
  • 2020 International Conference on Information Science, Parallel and Distributed Systems (ISPDS)
  • 2020
