Alejandro Rituerto
An important part of current research on appearance-based mapping goes towards richer semantic representations of the environment, which may allow autonomous systems to perform higher-level tasks and provide better human-robot interaction. This work presents a new omnidirectional vision-based scene labeling approach for augmented indoor topological mapping.
The SLAM (Simultaneous Localization and Mapping) problem is one of the essential challenges in current robotics. Our main objective in this work is to develop a real-time visual SLAM system using monocular omnidirectional vision. Our approach is based on the Extended Kalman Filter (EKF). We use the Spherical Camera Model to obtain geometric…
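The abstract names the Extended Kalman Filter as the estimation backbone. As a minimal sketch of that filtering loop, the predict/update steps below use simple stand-in linearized models `F` and `H` (hypothetical, not the paper's actual omnidirectional motion or measurement model):

```python
import numpy as np

def ekf_predict(x, P, F, Q):
    """Propagate the state mean and covariance through the motion model."""
    x_pred = F @ x                # linearized motion model
    P_pred = F @ P @ F.T + Q      # uncertainty grows by process noise Q
    return x_pred, P_pred

def ekf_update(x, P, z, H, R):
    """Correct the prediction with a measurement z."""
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P   # uncertainty shrinks after update
    return x_new, P_new
```

In an EKF-SLAM setting, `x` stacks the camera pose with landmark parameters, and `F`/`H` are Jacobians of the motion and projection models evaluated at the current estimate.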
Wearable computer vision systems provide plenty of opportunities to develop human assistive devices. This work contributes visual scene understanding techniques using a helmet-mounted omnidirectional vision system. The goal is to extract semantic information about the environment, such as the type of environment being traversed or the basic 3D layout of the…
Autonomous navigation and recognition of the environment are fundamental human abilities, extensively studied in the computer vision and robotics fields. The expansion of low-cost wearable sensing provides interesting opportunities for assistance systems that augment people's navigation and recognition capabilities. This work presents our wearable omnidirectional…
Intelligent autonomous systems need complex and detailed models of their environment to achieve sophisticated tasks. Vision sensors provide rich information and are broadly used to obtain or improve these models. The particular case of indoor scene understanding from monocular images has been widely studied, and a common initial step to solve this problem…
Mobile robots are of great help for automatic monitoring tasks in different environments. One of the first tasks that needs to be addressed when creating these kinds of robotic systems is modeling the robot's environment. This work proposes a pipeline to build an enhanced visual model of an indoor robot environment. Vision-based recognition approaches…
Intelligent systems need complex and detailed models of their environment to achieve more sophisticated tasks, such as assistance to the user. Vision sensors provide rich information and are broadly used to obtain these models; for example, indoor scene modeling from monocular images has been widely studied. A common initial step in those settings is the…
Scene understanding is a widely studied problem in computer vision. Many works approach this problem in indoor environments by assuming constraints about the scene, such as the typical Manhattan World assumption. The goal of this work is to design and evaluate a global descriptor for indoor panoramic images that encodes information about the 3D structure.
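To illustrate the general shape of a global descriptor for a panorama (not the paper's actual 3D-structure descriptor, which is not detailed in this excerpt), a simple hypothetical variant splits the panoramic image into vertical sectors and summarizes each with a gradient-orientation histogram:

```python
import numpy as np

def panorama_descriptor(img, n_sectors=8, n_bins=9):
    """Toy global descriptor: per-sector gradient-orientation histograms,
    concatenated and L2-normalized. Illustrative only."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)          # orientation in [0, pi)
    sectors = np.array_split(np.arange(img.shape[1]), n_sectors)
    parts = []
    for cols in sectors:
        h, _ = np.histogram(ang[:, cols], bins=n_bins,
                            range=(0.0, np.pi), weights=mag[:, cols])
        parts.append(h)
    d = np.concatenate(parts)
    n = np.linalg.norm(d)
    return d / n if n > 0 else d
```

Because the sectors follow the panorama's horizontal axis, such a descriptor is compact (here 72 values) and can be compared between images with a simple distance, which is the typical use of global descriptors for place recognition.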