A multimodal vision sensor for autonomous driving

@inproceedings{Sun2019AMV,
  title={A multimodal vision sensor for autonomous driving},
  author={Dongming Sun and Xiao Huang and Kailun Yang},
  booktitle={Security + Defence},
  year={2019}
}
  • This paper describes a multimodal vision sensor that integrates three types of cameras: a stereo camera, a polarization camera, and a panoramic camera. [...] Designed especially for autonomous driving, this vision sensor ships with a robust semantic segmentation network. In addition, we demonstrate how cross-modal enhancement can be achieved by registering the color image and the polarization image. An example of water hazard detection is given. To prove the multimodal vision sensor's…
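The cross-modal enhancement described in the abstract relies on registering the polarization image into the color camera's frame. A minimal sketch of such a registration step, assuming a known homography between the two views (the paper's actual registration method is not given here; `warp_to_reference` is a hypothetical helper using nearest-neighbor inverse warping):

```python
import numpy as np

def warp_to_reference(src, H, out_shape):
    """Warp a source image (e.g. polarization) into a reference frame
    (e.g. color camera) given a 3x3 homography H mapping source -> reference.
    Uses inverse mapping with nearest-neighbor sampling; pixels that fall
    outside the source are left at zero. Hypothetical illustrative helper."""
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Homogeneous coordinates of every output (reference-frame) pixel
    ref_pts = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    # Inverse mapping: where in the source does each output pixel come from?
    src_pts = np.linalg.inv(H) @ ref_pts
    src_pts /= src_pts[2]
    sx = np.round(src_pts[0]).astype(int)
    sy = np.round(src_pts[1]).astype(int)
    valid = (sx >= 0) & (sx < src.shape[1]) & (sy >= 0) & (sy < src.shape[0])
    out = np.zeros(out_shape, dtype=src.dtype)
    out_flat = out.ravel()
    out_flat[valid] = src[sy[valid], sx[valid]]
    return out

# Example: identity homography leaves the image unchanged
img = np.arange(12).reshape(3, 4)
registered = warp_to_reference(img, np.eye(3), (3, 4))
```

Once the two modalities share a pixel grid, per-pixel fusion (e.g. for water hazard cues from polarization) becomes a simple array operation.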

