Experimental and molecular modeling studies are conducted to investigate the underlying mechanisms for the high solubility of CO2 in imidazolium-based ionic liquids. CO2 absorption isotherms at 10, 25, and 50 °C are reported for six different ionic liquids formed by pairing three different anions with two cations that differ only in the nature of the …
Simultaneous localization and mapping (SLAM) consists in the concurrent construction of a model of the environment (the map), and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this …
In this paper we show how to carry out robust place recognition using both near and far information provided by a stereo camera. Visual appearance is known to be very useful in place recognition tasks. In recent years, it has been shown that also taking geometric information into account further improves system robustness. Stereo visual systems provide 3D …
We report brain electrophysiological responses from 10- to 13-month-old Mexican infants while listening to native and foreign CV-syllable contrasts differing in Voice Onset Time (VOT). All infants showed normal auditory event-related potential (ERP) components. Our analyses showed ERP evidence that Mexican infants are capable of discriminating their native …
We propose a semantic scene understanding system that is suitable for real robotic operations. The system solves different tasks (semantic segmentation and object detection) in an opportunistic and distributed fashion but still allows communication between modules to improve their respective performances. We propose the use of the semantic space to improve …
We explore the capabilities of Auto-Encoders to fuse the information available from cameras and depth sensors, and to reconstruct missing data, for scene understanding tasks. In particular we consider three input modalities: RGB images; depth images; and semantic label information. We seek to generate complete scene segmentations and depth maps, given …
Learning from demonstration for motion planning is an ongoing research topic. In this paper we present a model that is able to learn the complex mapping from raw 2D-laser range findings and a target position to the required steering commands for the robot. To the best of our knowledge, this work presents the first approach that learns a target-oriented …