Lawson L. S. Wong

Object search is an integral part of daily life, and in the quest for competent mobile manipulation robots it is an unavoidable problem. Previous approaches focus on cases where objects are in unknown rooms but lying out in the open, which transforms object search into active visual search. However, in real life, objects may be in the back of cupboards …
We consider the problem of grasping novel objects in cluttered environments. If a full 3-d model of the scene were available, one could use the model to estimate the stability and robustness of different grasps (formalized as form/force-closure, etc.); in practice, however, a robot facing a novel object will usually be able to perceive only the front …
The Pedigree Visualizer is a system for visualization of pedigree diagrams. It accepts a simple text-based specification of a pedigree diagram, which is then laid out automatically. Both GIF- and PS-formatted output files are produced. In addition, the Pedigree Visualizer also provides a rich set of functions for the manipulation and management …
We present our vision-based system for grasping novel objects in cluttered environments. Our system can be divided into four components: 1) decide where to grasp an object, 2) perceive obstacles, 3) plan an obstacle-free path, and 4) follow the path to grasp the object. While most prior work assumes availability of a detailed 3-d model of the environment, …
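As a rough illustration of that four-component decomposition (a minimal sketch only; the function names and placeholder logic below are assumptions, not the system's actual interfaces):

```python
import numpy as np

# Hypothetical four-stage grasping pipeline; every function is a placeholder
# standing in for the perception/planning components described above.

def select_grasp_point(image: np.ndarray) -> np.ndarray:
    """Stage 1: decide where to grasp (here, a fixed point in robot coordinates)."""
    return np.array([0.5, 0.0, 0.2])

def perceive_obstacles(depth: np.ndarray, max_range: float = 0.8) -> np.ndarray:
    """Stage 2: mark depth readings closer than `max_range` metres as obstacles."""
    return depth < max_range

def plan_path(start: np.ndarray, goal: np.ndarray, obstacles: np.ndarray) -> list:
    """Stage 3: return a sequence of waypoints (a straight line here; a real
    planner would route around the obstacle map)."""
    return [start + t * (goal - start) for t in np.linspace(0.0, 1.0, num=10)]

def follow_path(path: list) -> None:
    """Stage 4: execute the waypoints (printed instead of sent to a robot)."""
    for waypoint in path:
        print("move to", np.round(waypoint, 3))

if __name__ == "__main__":
    image = np.zeros((480, 640, 3), dtype=np.uint8)   # dummy camera image
    depth = np.full((480, 640), 1.5)                   # dummy depth map (metres)
    grasp = select_grasp_point(image)
    obstacles = perceive_obstacles(depth)
    follow_path(plan_path(np.zeros(3), grasp, obstacles))
```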
In this paper, we describe our methodologies and empirical evaluations for the shot boundary detection and automatic video search tasks at TRECVID 2006. For the shot boundary detection task, we consider a simple and efficient solution. Our approach first applies adaptive thresholding on color histogram differences between frames to select candidates for …
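As a sketch of what that candidate-selection step could look like (not the paper's implementation; the histogram binning, the L1 distance, and the local mean-plus-k-standard-deviations threshold are all assumptions):

```python
import numpy as np

def color_histogram(frame: np.ndarray, bins: int = 8) -> np.ndarray:
    """Per-channel color histogram of an RGB frame, normalized to sum to 1."""
    hist = [np.histogram(frame[..., c], bins=bins, range=(0, 255))[0] for c in range(3)]
    hist = np.concatenate(hist).astype(float)
    return hist / hist.sum()

def histogram_differences(frames: list) -> np.ndarray:
    """L1 distance between color histograms of consecutive frames."""
    hists = [color_histogram(f) for f in frames]
    return np.array([np.abs(hists[i + 1] - hists[i]).sum() for i in range(len(hists) - 1)])

def adaptive_candidates(diffs: np.ndarray, window: int = 10, k: float = 3.0) -> list:
    """Flag transitions whose difference exceeds a local mean + k * std threshold."""
    candidates = []
    for i, d in enumerate(diffs):
        lo, hi = max(0, i - window), min(len(diffs), i + window + 1)
        local = np.delete(diffs[lo:hi], i - lo)        # neighbouring differences only
        if d > local.mean() + k * local.std():
            candidates.append(i)                        # boundary between frame i and i+1
    return candidates

# Example on synthetic frames: an abrupt brightness change around frame 5
frames = [np.full((48, 64, 3), 30 if i < 5 else 200, dtype=np.uint8) for i in range(10)]
print(adaptive_candidates(histogram_differences(frames)))   # -> [4]
```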
Autonomous mobile-manipulation robots need to sense and interact with objects to accomplish high-level tasks such as preparing meals and searching for objects. To achieve such tasks, robots need semantic world models, defined as object-based representations of the world involving task-level attributes. In this work, we address the problem of estimating …
Spatial representations are fundamental to mobile robots operating in uncertain environments. Two frequently-used representations are occupancy grid maps, which only model metric information, and object-based world models, which only model object attributes. Many tasks represent space in just one of these two ways; however, because objects must be …
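To make the contrast between the two representations concrete (a minimal sketch with made-up classes, not any particular library's types):

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class OccupancyGrid:
    """Purely metric: each cell holds an occupancy probability, nothing about objects."""
    resolution: float                        # metres per cell
    probs: np.ndarray                        # 2-d array of occupancy probabilities

    def is_occupied(self, x: float, y: float, threshold: float = 0.5) -> bool:
        i, j = int(y / self.resolution), int(x / self.resolution)
        return bool(self.probs[i, j] > threshold)

@dataclass
class WorldObject:
    """Purely object-level: a pose and task-level attributes, nothing about free space."""
    name: str
    pose: np.ndarray                         # (x, y, theta)
    attributes: dict = field(default_factory=dict)

@dataclass
class ObjectWorldModel:
    objects: list = field(default_factory=list)

    def find(self, **attrs) -> list:
        return [o for o in self.objects
                if all(o.attributes.get(k) == v for k, v in attrs.items())]

# The grid knows which cells are blocked but not why; the object model knows
# there is a red mug but not what space it occupies.
grid = OccupancyGrid(resolution=0.05, probs=np.zeros((100, 100)))
grid.probs[40:44, 60:64] = 0.9
world = ObjectWorldModel([WorldObject("mug", np.array([3.05, 2.1, 0.0]), {"color": "red"})])
print(grid.is_occupied(3.05, 2.1), [o.name for o in world.find(color="red")])
```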
In state estimation, we often want the maximum likelihood estimate of the current state. For the commonly used joint multivariate Gaussian distribution over the state space, this can be efficiently found using a Kalman filter. However, in complex environments the state space is often highly constrained. For example, for objects within a refrigerator, they …
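For reference, the unconstrained Gaussian case is handled by a standard Kalman filter, whose posterior mean is also the maximum likelihood state estimate; the sketch below is textbook linear-Gaussian filtering, not the constrained estimator this work addresses:

```python
import numpy as np

def kalman_step(mu, Sigma, u, z, A, B, C, Q, R):
    """One predict/update step of a linear-Gaussian Kalman filter.

    Motion model:      x_t = A x_{t-1} + B u_t + noise,  noise ~ N(0, Q)
    Observation model: z_t = C x_t + noise,              noise ~ N(0, R)
    Returns the posterior mean and covariance; the mean is the ML state estimate.
    """
    # Predict
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + Q
    # Update
    S = C @ Sigma_bar @ C.T + R                      # innovation covariance
    K = Sigma_bar @ C.T @ np.linalg.inv(S)           # Kalman gain
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new

# 1-d example: a stationary object observed with noise
mu, Sigma = np.array([0.0]), np.array([[1.0]])
A = B = C = np.eye(1)
Q, R = 0.01 * np.eye(1), 0.25 * np.eye(1)
for z in [0.9, 1.1, 1.0]:
    mu, Sigma = kalman_step(mu, Sigma, np.zeros(1), np.array([z]), A, B, C, Q, R)
print(mu, Sigma)   # posterior mean approaches the true position near 1.0
```

When the state is constrained (e.g. objects cannot interpenetrate), this closed-form update no longer yields the constrained maximum, which is the situation the abstract above describes.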
Autonomous mobile-manipulation robots need to sense and interact with objects to accomplish high-level tasks such as preparing meals and searching for objects. Behavior in these tasks is typically guided by goals supplied to task-level planners, which in turn assume a representation of the world in terms of objects. In this work, we explore the use of …
Humans can ground natural language commands to tasks at both abstract and fine-grained levels of specificity. For instance, a human forklift operator can be instructed to perform a high-level action, like “grab a pallet,” or a low-level action, like “tilt back a little bit.” While robots are also capable of grounding language commands to tasks, previous …