José Javier Yebes Torres

— Text detection and recognition in images taken in uncontrolled environments remains a challenge in computer vision. This paper presents a method to extract the text depicted in road panels in street-view images as an application to Intelligent Transportation Systems (ITS). It applies a text detection algorithm to the whole image together with a …
— Traffic sign detection and recognition have been thoroughly studied for a long time. However, traffic panel detection and recognition remain a challenge in computer vision, owing to the panels' varied types and the huge variability of the information they depict. This paper presents a method to detect traffic panels in street-level images and to …
— Visual loop closure detection plays a key role in navigation systems for intelligent vehicles. Nowadays, state-of-the-art algorithms focus on unidirectional loop closures, but there are situations where these are not sufficient for identifying previously visited places. Therefore, detecting bidirectional loop closures when a place is revisited …
— This paper discusses the supervised learning of a car detector built as a Discriminative Part-based Model (DPM) from images in the recently published KITTI benchmark suite, as part of the object detection and orientation estimation challenge. We present a wide set of experiments and many hints on the different ways to supervise and …
— This paper presents a non-intrusive, computer-vision-based approach to drowsiness detection. It is installed in a car and works under real operating conditions. An IR camera placed on the dashboard, facing the driver, detects the face and extracts drowsiness cues from eye closure. It works in a robust and …
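The eye-closure cue mentioned above is commonly quantified in the drowsiness-detection literature with the PERCLOS metric (the fraction of time the eyes are mostly closed over a sliding window). A minimal sketch, assuming per-frame eye-openness scores (0 = closed, 1 = open) are already produced by the camera pipeline; the thresholds below are illustrative assumptions, not the paper's values:

```python
from collections import deque

def perclos(openness_scores, closed_threshold=0.2):
    """Fraction of frames in which eye openness falls below the
    threshold, i.e., the eyes are ~80% closed or more."""
    closed = sum(1 for s in openness_scores if s < closed_threshold)
    return closed / len(openness_scores)

# Sliding window over a stream of per-frame openness scores.
window = deque(maxlen=30)  # e.g., last 30 frames (~1 s at 30 fps)
for score in [1.0, 0.9, 0.1, 0.05, 0.8, 0.1]:
    window.append(score)

drowsy = perclos(window) > 0.15  # alarm threshold is an assumption
```

In this toy stream, half the frames show near-closed eyes, so the alarm would fire.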
An automatic text recognizer must first localize the text in the image as accurately as possible. For this purpose, this paper presents a robust method for text detection. It is composed of three main stages: a segmentation stage to find character candidates, a connected component analysis based on fast-to-compute but robust features …
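The connected-component stage described above can be sketched in plain Python: label the 4-connected components of a binary segmentation, then filter them with cheap features. The features and thresholds here (area, bounding-box aspect ratio) are illustrative assumptions, not the paper's exact ones:

```python
from collections import deque

def connected_components(binary):
    """4-connected component labeling on a binary grid (lists of 0/1)."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    comps, next_label = [], 1
    for y in range(h):
        for x in range(w):
            if binary[y][x] and not labels[y][x]:
                # BFS flood fill collecting this component's pixels
                queue, pixels = deque([(y, x)]), []
                labels[y][x] = next_label
                while queue:
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
                comps.append(pixels)
                next_label += 1
    return comps

def is_character_candidate(pixels, min_area=4, max_aspect=3.0):
    """Fast-to-compute features: area and bounding-box aspect ratio."""
    ys = [p[0] for p in pixels]
    xs = [p[1] for p in pixels]
    hgt = max(ys) - min(ys) + 1
    wid = max(xs) - min(xs) + 1
    aspect = max(hgt, wid) / min(hgt, wid)
    return len(pixels) >= min_area and aspect <= max_aspect
```

A real pipeline would compute these labels on the output of the segmentation stage and pass the surviving candidates to the grouping/recognition stages.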
Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, inexpensive vision sensors can capture the 3D scene as a driver perceives it, in terms of both appearance and depth cues. Indeed, providing 3D image-understanding capabilities to vehicles is an essential target in …
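The depth cue mentioned above comes from standard stereo geometry: with a calibrated camera pair, depth Z is recovered from pixel disparity d as Z = fB/d, where f is the focal length in pixels and B the baseline. A minimal sketch with assumed, roughly KITTI-like calibration values used only for illustration:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Standard pinhole stereo geometry: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Assumed calibration: focal length 700 px, baseline 0.54 m.
z = depth_from_disparity(10.0, 700.0, 0.54)  # -> 37.8 m
```

The inverse relationship between disparity and depth is why stereo depth estimates degrade for distant objects, where disparities shrink toward sub-pixel values.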