Cesar Cadena

Simultaneous localization and mapping (SLAM) consists in the concurrent construction of a model of the environment (the map), and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry.
We explore the capabilities of auto-encoders to fuse the information available from cameras and depth sensors, and to reconstruct missing data, for scene understanding tasks. In particular, we consider three input modalities: RGB images, depth images, and semantic label information. We seek to generate complete scene segmentations and depth maps, given …
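A minimal sketch of the kind of multi-modal fusion described above is given below, assuming a PyTorch implementation with one encoder per modality feeding a shared latent code; the architecture, layer sizes, and the class name MultiModalAE are illustrative assumptions, not the network from the paper.

```python
# Minimal sketch of a multi-modal auto-encoder that fuses RGB, depth, and
# semantic inputs into a shared latent code and reconstructs depth and
# semantics. Shapes and layers are illustrative assumptions.
import torch
import torch.nn as nn

class MultiModalAE(nn.Module):
    def __init__(self, latent_dim=256, num_classes=13, size=(64, 64)):
        super().__init__()
        h, w = size
        flat = h * w
        # One encoder per modality; codes are summed into a shared latent,
        # so any subset of modalities can be presented at test time.
        self.enc_rgb = nn.Sequential(nn.Flatten(), nn.Linear(3 * flat, latent_dim), nn.ReLU())
        self.enc_depth = nn.Sequential(nn.Flatten(), nn.Linear(flat, latent_dim), nn.ReLU())
        self.enc_sem = nn.Sequential(nn.Flatten(), nn.Linear(num_classes * flat, latent_dim), nn.ReLU())
        # One decoder per output modality, all reading the same shared code.
        self.dec_depth = nn.Linear(latent_dim, flat)
        self.dec_sem = nn.Linear(latent_dim, num_classes * flat)

    def forward(self, rgb=None, depth=None, sem=None):
        code = 0
        if rgb is not None:
            code = code + self.enc_rgb(rgb)
        if depth is not None:
            code = code + self.enc_depth(depth)
        if sem is not None:
            code = code + self.enc_sem(sem)
        return self.dec_depth(code), self.dec_sem(code)

# Usage: reconstruct depth and semantics from RGB alone (missing modalities).
model = MultiModalAE()
rgb = torch.rand(1, 3, 64, 64)
depth_hat, sem_logits = model(rgb=rgb)
```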
Experimental and molecular modeling studies are conducted to investigate the underlying mechanisms for the high solubility of CO2 in imidazolium-based ionic liquids. CO2 absorption isotherms at 10, 25, and 50 °C are reported for six different ionic liquids formed by pairing three different anions with two cations that differ only in the nature of the …
We propose a semantic scene understanding system that is suitable for real robotic operations. The system solves different tasks (semantic segmentation and object detection) in an opportunistic and distributed fashion but still allows communication between modules to improve their respective performance. We propose the use of the semantic space to improve …
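One way to picture modules communicating through a shared semantic space is sketched below; the blackboard structure, the label set, and the scoring rules are hypothetical illustrations, not the system described above.

```python
# Illustrative sketch (not the paper's system): two perception modules run
# independently and exchange their outputs through a shared semantic label
# space, so each can refine its own result with the other's evidence.
from dataclasses import dataclass, field

LABELS = ["floor", "wall", "chair", "person"]  # assumed label set

@dataclass
class SemanticSpace:
    """Shared blackboard keyed by semantic label."""
    evidence: dict = field(default_factory=dict)  # label -> confidence

    def post(self, label, confidence):
        self.evidence[label] = max(confidence, self.evidence.get(label, 0.0))

def segmentation_module(image, space):
    # A real segmenter would run here; per-label scores are faked for illustration.
    scores = {"floor": 0.9, "chair": 0.4}
    for label, conf in scores.items():
        space.post(label, conf)
    return scores

def detection_module(image, space):
    # The detector raises its prior for classes the segmenter already reported.
    return {label: 0.5 + 0.5 * space.evidence.get(label, 0.0) for label in LABELS}

space = SemanticSpace()
segmentation_module(image=None, space=space)
print(detection_module(image=None, space=space))
```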
Loop-closure detection on 3D data is a challenging task that has been commonly approached by adapting image-based solutions. Methods based on local features suffer from ambiguity and lack robustness to environment changes, while methods based on global features are viewpoint dependent. We propose SegMatch, a reliable loop-closure detection algorithm based on …
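The segment-based idea can be sketched roughly as follows: cluster the point cloud into segments, describe each segment, match descriptors against a database, and verify candidates geometrically. The descriptor, clustering parameters, and thresholds below are assumptions for illustration, not the paper's choices.

```python
# Rough sketch of segment-based loop-closure detection: cluster, describe,
# match against a database, then leave geometric verification to the caller.
import numpy as np
from scipy.spatial import cKDTree
from sklearn.cluster import DBSCAN

def extract_segments(cloud, eps=0.3, min_points=50):
    # Cluster the (N, 3) point cloud into object-like segments.
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(cloud)
    return [cloud[labels == k] for k in set(labels) if k != -1]

def describe(segment):
    # Toy descriptor: sorted eigenvalues of the segment covariance (shape cues).
    return np.sort(np.linalg.eigvalsh(np.cov(segment.T)))

def match_segments(query_cloud, db_descriptors, db_centroids, dist_thresh=0.1):
    # db_descriptors: (M, 3) descriptor array; db_centroids: (M, 3) positions.
    candidates = []
    tree = cKDTree(db_descriptors)
    for seg in extract_segments(query_cloud):
        d, idx = tree.query(describe(seg))
        if d < dist_thresh:
            candidates.append((seg.mean(axis=0), db_centroids[idx]))
    # A real system would now run a geometric-consistency check (e.g. RANSAC
    # over the candidate centroid pairs) before declaring a loop closure.
    return candidates
```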
In this paper we show how to carry out robust place recognition using both near and far information provided by a stereo camera. Visual appearance is known to be very useful in place recognition tasks. In recent years, it has been shown that also taking geometric information into account further improves system robustness. Stereo visual systems provide 3D …
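A hedged sketch of combining the two cues mentioned above, appearance similarity between images and a geometric check on near-field 3D stereo points, is given below; the similarity measures, weights, and thresholds are illustrative, not the paper's.

```python
# Toy combination of appearance and geometry for place recognition.
import numpy as np
from scipy.spatial import cKDTree

def appearance_score(bow_query, bow_candidate):
    # Cosine similarity between bag-of-words histograms of the two images.
    num = float(np.dot(bow_query, bow_candidate))
    den = np.linalg.norm(bow_query) * np.linalg.norm(bow_candidate) + 1e-9
    return num / den

def geometric_score(points_query, points_candidate):
    # Fraction of near 3D stereo points that find a close counterpart in the
    # candidate frame (assumes the two clouds are coarsely aligned).
    tree = cKDTree(points_candidate)
    d, _ = tree.query(points_query)
    return float(np.mean(d < 0.2))

def is_same_place(bow_q, bow_c, pts_q, pts_c, alpha=0.5, thresh=0.6):
    score = alpha * appearance_score(bow_q, bow_c) \
            + (1 - alpha) * geometric_score(pts_q, pts_c)
    return score > thresh
```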
Learning from demonstration for motion planning is an ongoing research topic. In this paper we present a model that is able to learn the complex mapping from raw 2D-laser range findings and a target position to the required steering commands for the robot. To the best of our knowledge, this work presents the first approach that learns a target-oriented end-to-end …
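To make the end-to-end mapping concrete, the sketch below shows a small network that consumes a raw 2D laser scan plus a relative target position and outputs linear and angular velocity commands; the layer sizes, beam count, and the class name LaserToSteering are assumptions for illustration, not the model from the paper.

```python
# Illustrative end-to-end policy: raw 2D laser scan + relative target in,
# steering commands out. Architecture and sizes are assumptions.
import torch
import torch.nn as nn

class LaserToSteering(nn.Module):
    def __init__(self, num_beams=1080):
        super().__init__()
        # 1D convolutions over the range readings, then fuse with the target.
        self.scan_net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=3), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, stride=3), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():
            feat = self.scan_net(torch.zeros(1, 1, num_beams)).shape[1]
        self.head = nn.Sequential(
            nn.Linear(feat + 2, 128), nn.ReLU(),
            nn.Linear(128, 2),  # [linear velocity, angular velocity]
        )

    def forward(self, scan, target_xy):
        features = self.scan_net(scan.unsqueeze(1))
        return self.head(torch.cat([features, target_xy], dim=1))

policy = LaserToSteering()
cmd = policy(torch.rand(1, 1080), torch.tensor([[2.0, 0.5]]))  # relative goal
```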