Learn More
Given a set of objects in a scene whose identifications are ambiguous, it is often possible to use relationships among the objects to reduce or eliminate the ambiguity. A striking example of this approach was given by Waltz [13]. This paper formulates the ambiguity-reduction process in terms of iterated parallel operations (i.e., relaxation operations) …
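To make the flavour of such iterated, parallel ambiguity reduction concrete, here is a minimal sketch of discrete relaxation in the spirit of Waltz-style filtering: each object keeps a set of candidate labels, and a label is discarded whenever some related object no longer has any compatible label. The objects, labels, neighbour lists, and compatibility rule below are illustrative assumptions, not data from the paper.

```python
# Minimal discrete-relaxation sketch: iteratively discard labels that no longer have a
# compatible partner at some related object (Waltz-style filtering).
# Objects, labels, neighbours, and the allowed pairs are illustrative only.

candidates = {                        # object -> set of still-possible labels
    "junction_a": {"convex", "concave", "occluding"},
    "junction_b": {"occluding"},
}
neighbours = {"junction_a": ["junction_b"], "junction_b": ["junction_a"]}

allowed = {                           # hypothetical pairwise constraints on adjacent objects
    ("convex", "convex"), ("concave", "concave"),
    ("occluding", "convex"), ("convex", "occluding"),
}

def compatible(label_i, label_j):
    return (label_i, label_j) in allowed

changed = True
while changed:                        # iterate the parallel pruning step to a fixpoint
    changed = False
    for obj, labels in candidates.items():
        for label in list(labels):
            # keep a label only if every related object still has a compatible label
            if any(not any(compatible(label, other) for other in candidates[nb])
                   for nb in neighbours[obj]):
                labels.discard(label)
                changed = True

print(candidates)
```

The pruning step only ever removes labels, so the loop always reaches a fixpoint; whatever ambiguity is left afterwards is exactly the ambiguity the local constraints cannot resolve.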
A large class of problems can be formulated in terms of the assignment of labels to objects. Frequently, processes are needed which reduce ambiguity and noise, and select the best label among several possible choices. Relaxation labeling processes are just such a class of algorithms. They are based on the parallel use of local constraints between labels. …
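The continuous counterpart of the discrete pruning above replaces hard label sets with probability vectors that neighbouring objects nudge through a compatibility matrix. The following sketch uses a single shared compatibility matrix and a nonlinear update of the classic p ← p(1 + q) form; the sizes, random initialisation, and compatibilities are illustrative only.

```python
import numpy as np

# Sketch of a nonlinear relaxation-labeling update: every object carries a probability
# vector over labels, and the labels of other objects lend (or withdraw) support through
# a compatibility matrix r[label, label'].  Sizes, initial probabilities, and r are made up.

n_objects, n_labels = 4, 3
rng = np.random.default_rng(0)

p = rng.random((n_objects, n_labels))
p /= p.sum(axis=1, keepdims=True)              # initial noisy label probabilities
r = rng.uniform(-1, 1, (n_labels, n_labels))   # compatibility r(label, label') in [-1, 1]

for _ in range(50):
    s = p @ r.T                                                 # s[j, l] = sum_l' r(l, l') p_j(l')
    q = (s.sum(axis=0, keepdims=True) - s) / (n_objects - 1)    # average support from the other objects
    p = p * (1.0 + q)                                           # classic p <- p (1 + q) update
    p = np.clip(p, 1e-12, None)
    p /= p.sum(axis=1, keepdims=True)                           # renormalize each object's probabilities

print(p.round(3))                              # final (typically much less ambiguous) label probabilities
```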
The standard approach to edge detection is based on a model of edges as large step changes in intensity. This approach fails to reliably detect and localize edges in natural images where blur scale and contrast can vary over a broad range. The main problem is that the appropriate spatial scale for local estimation depends upon the local structure of the …
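One common way to act on this observation (offered here only as a generic illustration, not as the paper's own estimator) is to compute Gaussian-derivative gradients over a range of scales and keep, per pixel, the scale with the strongest normalised response:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Generic scale-selection sketch (not the paper's estimator): gradient magnitude from
# Gaussian derivatives at several scales, keeping per pixel the scale whose
# gamma-normalized response (gamma = 1/2, a common choice for edges) is strongest.

def multiscale_gradient(img, sigmas=(1.0, 2.0, 4.0, 8.0)):
    img = img.astype(float)
    best_mag = np.zeros_like(img)
    best_sigma = np.zeros_like(img)
    for sigma in sigmas:
        gy = gaussian_filter(img, sigma, order=(1, 0))   # d/dy of the smoothed image
        gx = gaussian_filter(img, sigma, order=(0, 1))   # d/dx
        mag = np.sqrt(sigma) * np.hypot(gx, gy)          # gamma-normalized magnitude
        take = mag > best_mag
        best_mag[take] = mag[take]
        best_sigma[take] = sigma
    return best_mag, best_sigma

# Toy image: a vertical step edge, lightly blurred in the top half, heavily in the bottom.
step = (np.arange(64) > 32).astype(float)[None, :].repeat(64, axis=0)
img = np.vstack([gaussian_filter(step[:32], 1.0), gaussian_filter(step[32:], 5.0)])
mag, scale = multiscale_gradient(img)
print(scale[8, 32], scale[56, 32])   # the blurrier edge should be picked up at a coarser scale
```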
We describe a novel approach to curve inference based on curvature information. The inference procedure is divided into two stages: a trace inference stage, to which this paper is devoted, and a curve synthesis stage, which will be treated in a separate paper. It is shown that recovery of the trace of a curve requires estimating local models for the curve …
We have been developing a theory for the generic representation of 2-D shape, where structural descriptions are derived from the shocks (singularities) of a curve evolution process, acting on bounding contours. We now apply the theory to the problem of shape matching. The shocks are organized into a directed, acyclic shock graph, and complexity is managed …
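The shocks in question are singularities of a contour evolution and are closely related to the medial axis, so a very rough proxy for the ingredients of a shock graph can be obtained by skeletonising a binary shape and reading off the radius of the maximal inscribed disc along the skeleton. The toy shape and the use of skimage below are illustrative assumptions; organising these points into the directed, acyclic shock graph is the part the paper itself addresses.

```python
import numpy as np
from skimage.morphology import medial_axis

# Rough illustrative proxy only: extract a medial axis with its radius function from a
# toy binary shape.  The shock graph proper would additionally order and label these
# points by the dynamics of the underlying curve evolution.

shape = np.zeros((64, 64), dtype=bool)
shape[16:48, 8:56] = True                      # a simple rectangle as the toy shape

skeleton, distance = medial_axis(shape, return_distance=True)
radius = distance * skeleton                   # radius of the maximal inscribed disc at each skeleton point
print(int(skeleton.sum()), "skeleton points; max radius", radius.max())
```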
We undertake to develop a general theory of two-dimensional shape by elucidating several principles which any such theory should meet. The principles are organized around two basic intuitions: first, if a boundary were changed only slightly, then, in general, its shape would change only slightly. This leads us to propose an operational theory of shape based …
Existing methods for grouping edges on the basis of local smoothness measures fail to compute complete contours in natural images: it appears that a stronger global constraint is required. Motivated by growing evidence that the human visual system exploits contour closure for the purposes of perceptual grouping [6, 7, 14, 15, 25], we present an algorithm for …
It is well-known that the problem of matching two relational structures can be posed as an equivalent problem of finding a maximal clique in a (derived) "association graph." However, it is not clear how to apply this approach to computer vision problems where the graphs are hierarchically organized, i.e., are trees, since maximal cliques are not …
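For flat (non-hierarchical) structures the construction is straightforward, and sketching it clarifies what the abstract is building on: nodes of the association graph are candidate pairings, edges join pairings that preserve the relation, and a largest clique is read off as a matching. The two toy graphs below and the use of networkx are illustrative assumptions.

```python
import itertools
import networkx as nx

# Association-graph sketch for flat relational structures: nodes are candidate pairings
# (node of G1, node of G2), edges join pairings that are relationally consistent, and a
# largest clique is a maximum common (induced) subgraph matching.  Toy graphs only.

g1 = nx.Graph([("a", "b"), ("b", "c")])
g2 = nx.Graph([("x", "y"), ("y", "z"), ("x", "z")])

assoc = nx.Graph()
assoc.add_nodes_from(itertools.product(g1.nodes, g2.nodes))
for (i, a), (j, b) in itertools.combinations(list(assoc.nodes), 2):
    if i != j and a != b and g1.has_edge(i, j) == g2.has_edge(a, b):
        assoc.add_edge((i, a), (j, b))        # the two pairings are mutually consistent

best = max(nx.find_cliques(assoc), key=len)   # one maximum clique = one best matching
print(best)
```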
The eikonal equation and variants of it are of significant interest for problems in computer vision and image processing. It is the basis for continuous versions of mathematical morphology, stereo, shape-from-shading and for recent dynamic theories of shape. Its numerical simulation can be delicate, owing to the formation of singularities in the evolving …
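A standard way to cope with those singularities numerically is a first-order upwind (Godunov-type) discretisation of |∇T| = 1/F, here iterated to a fixed point rather than ordered as in fast marching; the grid, speed field, and source location below are illustrative.

```python
import numpy as np

# First-order upwind (Godunov-type) iteration for the eikonal equation |grad T| = 1/F:
# each point is repeatedly updated from its smallest upwind neighbours until the arrival
# times stop improving (a crude fixed-point version of fast sweeping / fast marching).

def eikonal(speed, source, h=1.0, n_sweeps=60):
    T = np.full(speed.shape, np.inf)
    T[source] = 0.0
    rows, cols = T.shape
    for _ in range(n_sweeps):
        for i in range(rows):
            for j in range(cols):
                if (i, j) == source:
                    continue
                a = min(T[i - 1, j] if i > 0 else np.inf,
                        T[i + 1, j] if i < rows - 1 else np.inf)
                b = min(T[i, j - 1] if j > 0 else np.inf,
                        T[i, j + 1] if j < cols - 1 else np.inf)
                if not np.isfinite(min(a, b)):
                    continue                              # no informed neighbour yet
                f = h / speed[i, j]
                if abs(a - b) >= f:                       # only one direction is upwind
                    t_new = min(a, b) + f
                else:                                     # quadratic two-sided update
                    t_new = 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))
                T[i, j] = min(T[i, j], t_new)
    return T

T = eikonal(np.ones((32, 32)), source=(16, 16))
print(T[16, 20], T[0, 0])                                 # arrival times grow with distance
```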
A number of active contour models have been proposed that unify the curve evolution framework with classical energy minimization techniques for segmentation, such as snakes. The essential idea is to evolve a curve (in two dimensions) or a surface (in three dimensions) under constraints from image forces so that it clings to features of interest in an …
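A minimal sketch of that idea, assuming the common level-set formulation in which the curve is the zero set of an embedding function φ evolved with speed g(I)(κ + ν), where g is small on strong image edges: the image, the stopping function g, and all parameters below are illustrative, and re-initialisation of φ is omitted for brevity.

```python
import numpy as np

# Level-set sketch: the contour is the zero set of phi, evolved with speed
# g(I) * (curvature + nu) along its normal, so that it expands (nu > 0) but slows down
# where the edge-stopping term g is small.

def evolve(phi, g, nu=1.0, dt=0.1, steps=200):
    for _ in range(steps):
        gy, gx = np.gradient(phi)
        norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
        # curvature = div( grad(phi) / |grad(phi)| )
        kyy, _ = np.gradient(gy / norm)
        _, kxx = np.gradient(gx / norm)
        phi = phi + dt * g * (kxx + kyy + nu) * norm
    return phi

# Toy image: a bright disc; g is small where the image gradient is large.
y, x = np.mgrid[0:64, 0:64]
image = (((x - 32) ** 2 + (y - 32) ** 2) < 15 ** 2).astype(float)
iy, ix = np.gradient(image)
g = 1.0 / (1.0 + 25.0 * (ix ** 2 + iy ** 2))

phi0 = 5.0 - np.sqrt((x - 32.0) ** 2 + (y - 32.0) ** 2)   # small seed circle inside the disc
phi = evolve(phi0, g)
print(int((phi > 0).sum()), "pixels inside the final contour")
```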