Jean-François Lalonde

Detecting shadows from images can significantly improve the performance of several vision tasks, such as object detection and tracking. Recent approaches have mainly relied on illumination invariants, which can fail severely when image quality is poor, as is the case for most consumer-grade photographs, such as those found on Google or Flickr. We…
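As a rough, hypothetical illustration of the kind of illumination invariant such approaches rely on (not the method of the paper above), a log-chromaticity image cancels a common per-pixel lighting factor; on noisy, compressed consumer photos this cancellation is exactly what tends to break down:

```python
# Hypothetical sketch (not the paper's method): a log-chromaticity
# "illumination invariant". Dividing each channel by the per-pixel geometric
# mean cancels a common multiplicative lighting factor, but the result is
# fragile on noisy, heavily compressed consumer photographs.
import numpy as np

def log_chromaticity(rgb):
    """rgb: float array of shape (H, W, 3) with values in (0, 1]."""
    rgb = np.clip(rgb.astype(np.float64), 1e-6, None)   # avoid log(0)
    geo_mean = rgb.prod(axis=2) ** (1.0 / 3.0)           # per-pixel geometric mean
    return np.log(rgb / geo_mean[..., None])             # lighting factor cancels
```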
In recent years, much progress has been made in outdoor autonomous navigation. However, safe navigation is still a daunting challenge in terrain containing vegetation. In this paper, we focus on the segmentation of ladar data into three classes using local three-dimensional point cloud statistics. The classes are: "scatter" to represent porous volumes such…
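A minimal sketch of one common way to compute such local point-cloud statistics, via the eigenvalues of the covariance of a point's neighborhood; the feature names and their interpretation here are illustrative, not the paper's exact definitions:

```python
# Illustrative sketch: eigenvalue-based saliency features from a local
# 3D neighborhood, in the spirit of local point-cloud statistics.
import numpy as np

def local_saliencies(neighborhood):
    """neighborhood: (N, 3) array of ladar points around a query point."""
    cov = np.cov(neighborhood.T)                  # 3x3 local covariance
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]  # lambda0 >= lambda1 >= lambda2
    scatter = lam[2]              # large when points fill a volume (e.g. vegetation)
    linear = lam[0] - lam[1]      # large along thin structures (wires, branches)
    surface = lam[1] - lam[2]     # large on planar patches (ground, solid surfaces)
    return scatter, linear, surface
```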
We present a system for inserting new objects into existing photographs by querying a vast image-based object library, pre-computed using a publicly available Internet object database. The central goal is to shield the user from all of the arduous tasks typically involved in image compositing. The user is only asked to do two simple things: 1) pick a 3D…
Why does placing an object from one photograph into another often make the colors of that object suddenly look wrong? One possibility is that humans prefer distributions of colors that are often found in nature; that is, we find pleasing the color combinations that we see most often. Another possibility is that humans simply prefer colors to be consistent…
Given a single outdoor image, we present a method for estimating the likely illumination conditions of the scene. In particular, we compute the probability distribution over the sun position and visibility. The method relies on a combination of weak cues that can be extracted from different portions of the image: the sky, the vertical surfaces, and the…
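One simple way to picture combining weak cues is to fuse per-cue likelihoods over a discretized grid of sun positions; the independence assumption and the function below are a hedged sketch, not necessarily the paper's exact model:

```python
# Hedged sketch: fusing independent weak cues into a distribution over a
# discretized grid of sun positions (zenith x azimuth bins).
import numpy as np

def fuse_sun_cues(cue_likelihoods, prior=None):
    """cue_likelihoods: list of (Z, A) arrays giving P(cue_i | sun position)."""
    log_post = np.zeros_like(cue_likelihoods[0]) if prior is None else np.log(prior)
    for lik in cue_likelihoods:
        log_post = log_post + np.log(np.clip(lik, 1e-12, None))
    post = np.exp(log_post - log_post.max())   # stabilize before normalizing
    return post / post.sum()                   # P(sun position | all cues)
```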
Three-dimensional ladar data are commonly used to perform scene understanding for outdoor mobile robots, specifically in natural terrain. One effective method is to classify points into surfaces, linear structures, or clutter volumes using features based on the local point cloud distribution. However, the local features are computed using 3D points within a…
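The choice of support region over which those local features are gathered is what the abstract above turns on; as an illustrative sketch only (the parameters are hypothetical), a k-d tree lets one gather neighborhoods either with a fixed metric radius or a fixed number of neighbors:

```python
# Illustrative sketch: gathering local neighborhoods for per-point features,
# either with a fixed radius (constant in world units) or a fixed neighbor
# count (support adapts to point density).
import numpy as np
from scipy.spatial import cKDTree

def neighborhoods(points, radius=0.5, k=None):
    """points: (N, 3) ladar returns; returns per-point index lists/arrays."""
    tree = cKDTree(points)
    if k is not None:
        _, idx = tree.query(points, k=k)        # k nearest neighbors per point
        return [row for row in idx]
    return tree.query_ball_point(points, r=radius)  # all points within radius
```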
Webcams placed all over the world observe and record the visual appearance of a variety of outdoor scenes over long periods of time. The recorded time-lapse image sequences cover a wide range of illumination and weather conditions -- a vast untapped resource for creating visual realism. In this work, we propose to use a large repository of webcams as a…
As the main observed illuminant outdoors, the sky is a rich source of information about the scene. However, it is yet to be fully explored in computer vision because its appearance in an image depends on the sun position, weather conditions, photometric and geometric parameters of the camera, and the location of capture. In this paper, we analyze two…
Given a single outdoor image, we present a method for estimating the likely illumination conditions of the scene. In particular, we compute the probability distribution over the sun position and visibility. The method relies on a combination of weak cues that can be extracted from different portions of the image: the sky, the vertical surfaces, the ground,…
The appearance of an outdoor scene is determined to a great extent by the prevailing illumination conditions. However, most practical computer vision applications treat illumination as a nuisance rather than a source of signal. In this dissertation, we suggest that we should instead embrace illumination, even in the challenging, uncontrolled world of…