Corpus ID: 131882820

Towards 3D reconstruction of outdoor scenes by mmw radar and a vision sensor fusion

@inproceedings{Natour2016Towards3R,
  title={Towards 3D reconstruction of outdoor scenes by mmw radar and a vision sensor fusion},
  author={Ghina El Natour},
  year={2016}
}
The main goal of this PhD work is to develop 3D mapping methods for large-scale environments by combining a panoramic radar and cameras. Unlike existing sensor fusion methods, such as SLAM (simultaneous localization and mapping), we want to build an RGB-D sensor which directly provides depth measurements enhanced with texture and color information. After modeling the geometry of the radar/camera system, we propose a novel calibration method using point correspondences. To obtain these points…
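The thesis itself is not reproduced on this page, but the kind of extrinsic calibration from point correspondences the abstract mentions is commonly posed as a least-squares rigid-transform fit between matched 3D points from the two sensors. The sketch below (Kabsch/SVD method) is illustrative only; the function name and synthetic data are not from the thesis.

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) with dst ~= R @ src + t (Kabsch/SVD)."""
    src_c = src - src.mean(axis=0)          # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Synthetic check: recover a known rotation about z and a translation
rng = np.random.default_rng(0)
pts = rng.normal(size=(20, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
obs = pts @ R_true.T + t_true
R_est, t_est = estimate_rigid_transform(pts, obs)
```

In practice the radar/camera geometry modeled in the thesis adds sensor-specific projection models on top of this; the SVD fit only illustrates the point-correspondence estimation step.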
1 Citation
Agrégation d'information pour la localisation d'un robot mobile sur une carte imparfaite. (Information aggregation for the localization of a mobile robot using a non-perfect map)
TLDR
We wish to lift these limitations, and propose using pre-existing semantic maps, such as OpenStreetMap, as a base map for localization.

References

SHOWING 1-10 OF 105 REFERENCES
Toward 3D Reconstruction of Outdoor Scenes Using an MMW Radar and a Monocular Vision Sensor
TLDR
A geometric method for 3D reconstruction of the exterior environment using a panoramic microwave radar and a camera, exploiting the complementarity of the two sensors: the radar's robustness to environmental conditions and depth-detection ability, and the high spatial resolution of the vision sensor.
Radar and vision sensors calibration for outdoor 3D reconstruction
TLDR
A new geometric calibration algorithm and a geometric method for 3D reconstruction using a panoramic microwave radar and a camera, which are complementary: the radar is robust to environmental conditions and measures depth, while the vision sensor offers high spatial resolution.
Sensor Fusion of Cameras and a Laser for City-Scale 3D Reconstruction
TLDR
A sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction, designed to capture data from a fast-moving ground vehicle; the problem of error accumulation is solved by loop closing rather than by GPS.
Digitizing and 3D modeling of urban environments and roads using vehicle-borne laser scanner system
TLDR
This paper proposes a high-level representation of the urban scene, automatically identifying certain object types in real time while reconstructing the 3D geometry of the environment through real-time geo-referencing.
Reconstructing a textured CAD model of an urban environment using vehicle-borne laser range scanners and line cameras
TLDR
A novel method for generating a textured CAD model of an outdoor urban environment using a vehicle-borne sensor system; an outdoor experiment is conducted, and the model is reconstructed fully automatically.
Robust 3D reconstruction using LiDAR and N visual images
TLDR
A method that uses one LiDAR image and N conventional visual images to reduce error and build a robust registration for 3D reconstruction; the method is demonstrated on a synthetic model that is an idealized representation of an urban environment.
Distributed multi sensor data fusion for autonomous 3D mapping
TLDR
An autonomous platform capable of generating 3D imagery of the environment in unknown indoor and outdoor contexts, composed of a number of data fusion processes performed in real time by on-board and/or off-board processing nodes.
Parameters Separated Calibration Based on Particle Swarm Optimization for a Camera and a Laser-Rangefinder Information Fusion
Heterogeneous sensor fusion of a camera and a laser rangefinder can greatly improve environment-perception ability; its primary problem is the calibration between the depth scan and the image.
Accurate Multiple View 3D Reconstruction Using Patch-Based Stereo for Large-Scale Scenes
  • Shuhan Shen
  • Medicine, Computer Science
  • IEEE Transactions on Image Processing
  • 2013
TLDR
A depth-map-merging-based multiple-view stereo method for large-scale scenes that takes both accuracy and efficiency into account and can reconstruct accurate, dense point clouds with high computational efficiency.
Integration of LiDAR data and optical multi-view images for 3D reconstruction of building roofs
TLDR
Experimental results indicate that the proposed approach to automatic reconstruction of 3D building roof models can provide high-quality roof models with diverse structural complexities.