Gabriele Costante

Visual ego-motion estimation, or visual odometry (VO) for short, is one of the key building blocks of modern SLAM systems. In the last decade, impressive results have been demonstrated in the context of visual navigation, reaching very high localization performance. However, all ego-motion estimation systems require careful parameter-tuning procedures for the…
Obstacle detection is a central problem for any robotic system, and critical for autonomous systems that travel at high speeds in unpredictable environments. This is often achieved through scene depth estimation, by various means. When fast motion is considered, the detection range must be long enough to allow for safe avoidance and path planning. Current…
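The detection-range requirement mentioned above can be made concrete with a standard stopping-distance argument: the robot must see an obstacle at least as far away as the distance it covers during its reaction delay plus its braking distance. A minimal sketch (the function name and the safety-margin parameter are illustrative, not from the paper):

```python
def min_detection_range(speed, reaction_time, max_decel, margin=0.0):
    """Minimum obstacle-detection range in meters for a robot moving
    at `speed` (m/s).

    Sum of the distance covered during the reaction delay
    (speed * reaction_time), the braking distance under constant
    deceleration (speed^2 / (2 * max_decel)), and an optional safety
    margin.
    """
    if max_decel <= 0:
        raise ValueError("max_decel must be positive")
    return speed * reaction_time + speed ** 2 / (2.0 * max_decel) + margin


# A quadrotor at 10 m/s with a 0.2 s perception latency and 5 m/s^2
# of usable deceleration must detect obstacles at least 12 m away:
# 10 * 0.2 + 100 / 10 = 12.0 m.
print(min_detection_range(10.0, 0.2, 5.0))
```

The quadratic term is what makes high-speed flight demanding: doubling the speed roughly quadruples the braking distance, so the required sensing range grows much faster than the speed itself.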
Visual Odometry (VO) is one of the fundamental building blocks of modern autonomous robot navigation and mapping. While most state-of-the-art techniques use geometrical methods for camera ego-motion estimation from optical flow vectors, in the last few years learning approaches have been proposed to solve this problem. These approaches are emerging and…
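The geometric baseline that the learning approaches above are compared against typically recovers the relative camera motion from point correspondences via the essential matrix. A minimal sketch of the classical eight-point algorithm on calibrated coordinates (this is a textbook method, not the specific pipeline of the paper):

```python
import numpy as np

def eight_point_essential(x1, x2):
    """Estimate the essential matrix E from >= 8 correspondences.

    x1, x2: (N, 2) arrays of matched, calibrated (normalized) image
    coordinates, so that x2_h^T @ E @ x1_h == 0 for each pair.
    """
    n = x1.shape[0]
    x1_h = np.hstack([x1, np.ones((n, 1))])
    x2_h = np.hstack([x2, np.ones((n, 1))])
    # Each correspondence gives one linear constraint on the 9 entries
    # of E: rows are the Kronecker products kron(x2_h, x1_h).
    A = np.einsum('ni,nj->nij', x2_h, x1_h).reshape(n, 9)
    # The stacked E is the right singular vector of A with the
    # smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Project onto the essential-matrix manifold: two equal singular
    # values, third exactly zero.
    U, s, Vt = np.linalg.svd(E)
    sigma = (s[0] + s[1]) / 2.0
    return U @ np.diag([sigma, sigma, 0.0]) @ Vt
```

In a full VO pipeline, E would then be decomposed into a rotation and a (scale-ambiguous) translation, with the correct solution selected by a cheirality check; robust estimators such as RANSAC wrap this linear step to handle outlier matches.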
The widespread adoption of mobile devices has led to increased interest in smartphone-based solutions for supporting visually impaired users. Unfortunately, the touch-based interaction paradigm commonly adopted on most devices is not convenient for these users, motivating the study of different interaction technologies. In this paper, following up on…
As researchers strive to develop robotic systems able to move into "the wild", interest in novel learning paradigms for domain adaptation has increased. In the specific application of semantic place recognition from cameras, supervised learning algorithms are typically adopted. However, once learning has been performed, if the robot…
The place recognition module is a fundamental component of SLAM systems, as incorrect loop closures may result in severe errors in trajectory estimation. In appearance-based methods, the bag-of-words approach is typically employed for recognizing locations. This paper introduces a novel algorithm for improving loop-closure detection performance…
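The bag-of-words scheme mentioned above summarizes each image as a histogram over a visual vocabulary and compares histograms to score candidate loop closures. A minimal sketch, assuming each local descriptor has already been quantized to a vocabulary index (the L1-based score is one common choice in the literature, not necessarily the one used in the paper):

```python
import numpy as np

def bow_histogram(descriptor_words, vocab_size):
    """L1-normalized visual-word histogram for one image, given the
    vocabulary index assigned to each local descriptor."""
    h = np.bincount(descriptor_words, minlength=vocab_size).astype(float)
    return h / max(h.sum(), 1.0)

def bow_similarity(h1, h2):
    """Similarity in [0, 1] between two normalized histograms:
    1 - 0.5 * L1 distance (1 = identical, 0 = disjoint word sets)."""
    return 1.0 - 0.5 * np.abs(h1 - h2).sum()


# Two images sharing the same visual words score 1.0; images with
# disjoint word sets score 0.0.
a = bow_histogram(np.array([0, 1, 1, 2]), 4)
b = bow_histogram(np.array([0, 1, 1, 2]), 4)
c = bow_histogram(np.array([3, 3, 3, 3]), 4)
print(bow_similarity(a, b), bow_similarity(a, c))
```

Production systems add inverted indices for fast candidate retrieval and temporal/geometric consistency checks before accepting a loop closure, precisely because a single high appearance score can still be a false positive.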
In this paper, we give a double twist to the problem of planning under uncertainty. State-of-the-art planners seek to minimize the localization uncertainty by considering only the geometric structure of the scene. In this paper, we argue that motion planning for vision-controlled robots should be perception-aware, in that the robot should also favor…
Modern autonomous mobile robots require a strong understanding of their surroundings in order to safely operate in cluttered and dynamic environments. Monocular depth estimation offers a geometry-independent paradigm to detect free, navigable space with minimal space and power consumption. These are highly desirable features, especially for…
Following recent works on HRI for UAVs, we present a gesture recognition system that operates on the video stream recorded from a passive monocular camera installed on a quadcopter. While many challenges must be addressed to build a real-time vision-based gestural interface, in this paper we specifically focus on the problem of user personalization…
A robust gesture recognition system is an essential component in many human-computer interaction applications. In particular, the widespread adoption of portable devices and the diffusion of autonomous systems with limited power and load capacity have increased the need for efficient recognition algorithms that operate on video streams recorded…