Full Surround Monodepth from Multiple Cameras

@article{Guizilini2021FullSM,
  title={Full Surround Monodepth from Multiple Cameras},
  author={V. Guizilini and Igor Vasiljevic and Rares Ambrus and G. Shakhnarovich and Adrien Gaidon},
  journal={ArXiv},
  year={2021},
  volume={abs/2104.00152}
}
Self-supervised monocular depth and ego-motion estimation is a promising approach to replace or supplement expensive depth sensors such as LiDAR for robotics applications like autonomous driving. However, most research in this area focuses on a single monocular camera or stereo pairs that cover only a fraction of the scene around the vehicle. In this work, we extend monocular self-supervised depth and ego-motion estimation to large-baseline multi-camera rigs. Using generalized spatio-temporal…
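The self-supervision described in the abstract rests on the standard photometric reprojection objective: predicted depth and relative pose are used to warp a source frame into the target view, and the depth/pose networks are trained to minimize the resulting photometric error. Below is a minimal single-camera sketch of that warping step in PyTorch. The function names (`backproject`, `project`, `photometric_warp_loss`) and the plain L1 loss are illustrative assumptions, not the paper's implementation, which generalizes this consistency across multiple cameras with spatio-temporal contexts and typically combines SSIM with L1 plus a depth smoothness term.

```python
import torch
import torch.nn.functional as F

def backproject(depth, K_inv):
    """Lift a depth map (B,1,H,W) to camera-frame 3D points (B,3,H*W)."""
    b, _, h, w = depth.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=depth.dtype, device=depth.device),
        torch.arange(w, dtype=depth.dtype, device=depth.device),
        indexing="ij")
    ones = torch.ones_like(xs)
    pix = torch.stack([xs, ys, ones], dim=0).view(1, 3, -1)  # homogeneous pixel grid
    rays = K_inv @ pix                                        # viewing rays, (B,3,H*W)
    return rays * depth.view(b, 1, -1)                        # scale rays by depth

def project(points, K, T):
    """Transform 3D points (B,3,N) by pose T (B,4,4) and project with intrinsics K (B,3,3)."""
    b, _, n = points.shape
    pts_h = torch.cat(
        [points, torch.ones(b, 1, n, dtype=points.dtype, device=points.device)], dim=1)
    cam = (T @ pts_h)[:, :3]                                  # points in source camera frame
    pix = K @ cam
    return pix[:, :2] / pix[:, 2:].clamp(min=1e-6)            # perspective division, (B,2,N)

def photometric_warp_loss(target, source, depth, K, K_inv, T):
    """Warp the source image into the target view and return the L1 photometric error."""
    b, _, h, w = target.shape
    points = backproject(depth, K_inv)
    pix = project(points, K, T).view(b, 2, h, w)
    # Normalize pixel coordinates to [-1, 1] for grid_sample.
    grid = torch.stack([2 * pix[:, 0] / (w - 1) - 1,
                        2 * pix[:, 1] / (h - 1) - 1], dim=-1)
    warped = F.grid_sample(source, grid, padding_mode="border", align_corners=True)
    return (warped - target).abs().mean()
```

In the single-camera setting, `T` is the predicted ego-motion between temporally adjacent frames; the multi-camera extension discussed in the paper additionally uses known extrinsics between cameras in the rig so that spatial (cross-camera) as well as temporal contexts can supervise depth.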
