Corpus ID: 236087462

Woodscape Fisheye Semantic Segmentation for Autonomous Driving - CVPR 2021 OmniCV Workshop Challenge

@article{Ramachandran2021WoodscapeFS,
  title={Woodscape Fisheye Semantic Segmentation for Autonomous Driving - CVPR 2021 OmniCV Workshop Challenge},
  author={Saravanabalagi Ramachandran and Ganesh Sistu and John B. McDonald and Senthil Kumar Yogamani},
  journal={arXiv preprint arXiv:2107.08246},
  year={2021}
}
We present the WoodScape fisheye semantic segmentation challenge for autonomous driving, which was held as part of the CVPR 2021 Workshop on Omnidirectional Computer Vision (OmniCV). This challenge is one of the first opportunities for the research community to evaluate semantic segmentation techniques targeted at fisheye camera perception. Due to strong radial distortion, standard models do not generalize well to fisheye images, and hence to the deformations in the visual appearance of objects…
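The radial distortion mentioned above can be illustrated with a toy comparison of projection models. The sketch below contrasts a rectilinear pinhole projection, where the image radius grows as f·tan(θ), with the equidistant model, one common fisheye approximation where the radius grows linearly as f·θ (WoodScape's own calibration uses a more general polynomial model; function names here are illustrative, not from the paper).

```python
import math

def pinhole_radius(theta, f):
    """Rectilinear (pinhole) projection: image radius grows as f * tan(theta),
    diverging as the incidence angle approaches 90 degrees."""
    return f * math.tan(theta)

def equidistant_radius(theta, f):
    """Equidistant fisheye projection: image radius grows linearly as f * theta,
    so wide fields of view stay bounded but straight lines appear curved."""
    return f * theta

# Near the optical axis the two models nearly agree; at wide angles they
# diverge sharply, which is why a model trained on rectilinear images
# sees very different object shapes on a fisheye camera.
f = 1.0
for deg in (10, 45, 80):
    theta = math.radians(deg)
    print(deg, round(pinhole_radius(theta, f), 3), round(equidistant_radius(theta, f), 3))
```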


References

Showing 1–10 of 32 references
WoodScape: A Multi-Task, Multi-Camera Fisheye Dataset for Autonomous Driving
Releases the first extensive fisheye automotive dataset, WoodScape, named after Robert Wood, which comprises four surround-view cameras and nine tasks including segmentation, depth estimation, 3D bounding box detection and soiling detection.
Generalized Object Detection on Fisheye Cameras for Autonomous Driving: Dataset, Representations and Baseline
Designs a novel curved bounding box model with optimal properties for fisheye distortion models, together with a curvature-adaptive perimeter sampling method for obtaining polygon vertices, improving relative mAP by 4.9% compared to uniform sampling.
The OmniScape Dataset
Presents a framework for generating omnidirectional images from a virtual environment and describes the resulting OmniScape dataset, which includes stereo fisheye and catadioptric images acquired from the two front sides of a motorcycle.
DeepTrailerAssist: Deep Learning Based Trailer Detection, Tracking and Articulation Angle Estimation on Automotive Rear-View Camera
  • Ashok Dahal, J. Hossen, +5 authors D. Troy
  • Computer Science
  • 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
  • 2019
Presents the trailer-assist use cases in detail and proposes a deep learning based solution for trailer perception problems, using a proprietary dataset comprising 11 different trailer types to achieve reasonable detection accuracy.
Pyramid Scene Parsing Network
Exploits global context information via different-region-based context aggregation through a pyramid pooling module, together with the proposed pyramid scene parsing network (PSPNet), to produce good-quality results on the scene parsing task.
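The pyramid pooling idea behind PSPNet can be sketched in plain NumPy: average-pool the feature map at several grid resolutions, upsample each grid back to the input size, and concatenate everything channel-wise. This is only an illustration of the aggregation scheme under simplifying assumptions (nearest-neighbour upsampling, no convolutions); the actual PSPNet uses adaptive average pooling followed by 1×1 convolutions and bilinear upsampling inside a CNN.

```python
import numpy as np

def pyramid_pool(feat, bins=(1, 2, 3, 6)):
    """Sketch of a pyramid pooling module on a CxHxW feature map:
    average-pool into b x b grids, upsample back to HxW (nearest
    neighbour), and concatenate with the input along channels."""
    c, h, w = feat.shape
    pooled = [feat]
    for b in bins:
        grid = np.zeros((c, b, b))
        for i in range(b):
            for j in range(b):
                hs, he = i * h // b, (i + 1) * h // b
                ws, we = j * w // b, (j + 1) * w // b
                grid[:, i, j] = feat[:, hs:he, ws:we].mean(axis=(1, 2))
        # nearest-neighbour upsample the b x b grid back to H x W
        up = grid[:, np.arange(h) * b // h, :][:, :, np.arange(w) * b // w]
        pooled.append(up)
    # output has C * (1 + len(bins)) channels
    return np.concatenate(pooled, axis=0)
```

The coarsest 1×1 bin captures global context (a single average over the whole map), while finer bins preserve regional detail; concatenating all scales is what lets the classifier see both at once.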
Deep High-Resolution Representation Learning for Visual Recognition
Demonstrates the superiority of the proposed HRNet across a wide range of applications, including human pose estimation, semantic segmentation, and object detection, suggesting that HRNet is a stronger backbone for computer vision problems.
1 year, 1000 km: The Oxford RobotCar dataset
By frequently traversing the same route over the period of a year, this dataset enables research into long-term localization and mapping for autonomous vehicles in real-world, dynamic urban environments.
Near-field Sensing Architecture for Low-Speed Vehicle Automation using a Surround-view Fisheye Camera System
Describes the visual perception architecture on surround-view cameras designed for a system deployed in commercial vehicles, provides a functional review of the different stages of such a computer vision system, and discusses some of the current technological challenges.
Momentum Contrast for Unsupervised Visual Representation Learning
We present Momentum Contrast (MoCo) for unsupervised visual representation learning. From a perspective on contrastive learning as dictionary look-up, we build a dynamic dictionary with a queue and a…
Deep Residual Learning for Image Recognition
Presents a residual learning framework to ease the training of networks substantially deeper than those used previously, with comprehensive empirical evidence that these residual networks are easier to optimize and can gain accuracy from considerably increased depth.
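The core residual idea, y = F(x) + x, is simple to illustrate. The toy block below (plain NumPy with fully connected layers, illustrative names, not the paper's convolutional architecture) shows how the identity shortcut lets a block default to the identity mapping when its weights are near zero, which is one intuition for why very deep residual stacks remain optimizable.

```python
import numpy as np

def residual_block(x, w1, w2):
    """Toy residual unit: y = F(x) + x, with F two linear layers and a ReLU.
    The identity shortcut means the block reduces to y = x when the weights
    are zero, so adding blocks cannot make the representation worse by default."""
    h = np.maximum(w1 @ x, 0.0)   # first layer + ReLU
    return w2 @ h + x             # residual function plus identity shortcut

d = 4
x = np.ones(d)
zero = np.zeros((d, d))
# with zero weights the block passes its input through unchanged
print(residual_block(x, zero, zero))
```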