Colin McManus

In this paper we propose the use of an illumination invariant transform to improve many aspects of visual localisation, mapping and scene classification for autonomous road vehicles. The illumination invariant colour space stems from modelling the spectral properties of the camera and scene illumination in conjunction, and requires only a single parameter …
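The snippet above describes a single-parameter, illumination-invariant colour space derived from the camera's spectral response. A minimal sketch of one common form of such a log-channel transform follows; the 0.5 offset, channel ordering, and the alpha value used here are illustrative assumptions for a generic sensor, not values taken from the paper.

```python
import numpy as np

def illumination_invariant(rgb, alpha=0.48):
    """Map an RGB image (values in [0, 1]) to a one-channel
    illumination-invariant image.

    The feature is a weighted difference of log channel responses:
        I = 0.5 + log(G) - alpha * log(B) - (1 - alpha) * log(R)
    where alpha depends on the camera's spectral response; it is the
    single parameter the abstract refers to. alpha=0.48 is only an
    illustrative default here.
    """
    eps = 1e-6  # guard against log(0) in dark pixels
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (0.5 + np.log(g + eps)
            - alpha * np.log(b + eps)
            - (1.0 - alpha) * np.log(r + eps))

# A uniform brightness change scales all channels by the same factor k,
# which contributes log(k) - alpha*log(k) - (1-alpha)*log(k) = 0 to the
# feature, so a lit and a shadowed view of the same surface map to
# nearly identical invariant values.
patch_lit = np.full((4, 4, 3), 0.8)
patch_shadow = patch_lit * 0.3
print(np.allclose(illumination_invariant(patch_lit),
                  illumination_invariant(patch_shadow)))  # prints True
```

The cancellation under uniform scaling is what removes shadow boundaries from the invariant image, which is the property the localisation and classification pipelines described here exploit.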
Visual Teach and Repeat (VT&R) has proven to be an effective method to allow a vehicle to autonomously repeat any previously driven route without the need for a global positioning system. One of the major challenges for a method that relies on visual input to recognize previously visited places is lighting change, as this can make the appearance of a …
This paper is about extending the reach and endurance of outdoor localisation using stereo vision. At the heart of the localisation is the fundamental task of discovering feature correspondences between recorded and live images. One aspect of this problem is deciding where to look for correspondences in an image; the second is deciding what to look …
cattle and horse breeds to begin, and, in the near future, work with asses, buffalo and sheep will be conducted. From the results of this research it will be possible to compare the native breeds and estimate genetic distances between them. The harmonisation of chosen micro-satellites with those which have been used in other Latin American and Iberian …
This paper is about localising across extreme lighting and weather conditions. We depart from the traditional point-feature-based approach, since matching under dramatic appearance changes is brittle and hard. Point-feature detectors are rigid procedures which pass over an image examining small, low-level structure such as corners or blobs. They apply the …
Cameras have emerged as the dominant sensor modality for localization and mapping in three-dimensional, unstructured terrain, largely due to the success of sparse, appearance-based techniques, such as visual odometry. However, the Achilles' heel for all camera-based systems is their dependence on consistent ambient lighting, which poses a serious problem in …
This paper is concerned with the problem of egomotion estimation in highly dynamic, heavily cluttered urban environments over long periods of time. This is a challenging problem for vision-based systems because extreme scene movement caused by dynamic objects (e.g., enormous buses) can result in erroneous motion estimates. We describe two methods that …
In this paper we propose the hybrid use of illuminant invariant and RGB images to perform image classification of urban scenes despite challenging variation in lighting conditions. Coping with lighting change (and the shadows thereby induced) is a non-negotiable requirement for long-term autonomy using vision. One aspect of this is the ability to reliably …
Visual-teach-and-repeat (VT&R) systems have proven extremely useful for practical robot autonomy where the global positioning system is either unavailable or unreliable; examples include tramming for underground mining using a planar laser scanner, as well as a return-to-lander function for planetary exploration using a stereo- or laser-based camera. By …