Barbara L. Loeding

We consider two crucial problems in continuous sign language recognition from unaided video sequences. At the sentence level, we consider the movement epenthesis (ME) problem, and at the feature level, we consider the problem of hand segmentation and grouping. We construct a framework that can handle both of these problems based on an enhanced, nested …
One of the hard problems in automated sign language recognition is the movement epenthesis (ME) problem. Movement epenthesis is the gestural movement that bridges two consecutive signs. This effect can span a long duration and involve variations in hand shape, position, and movement, making it hard to explicitly model these intervening segments. …
This paper reviews the extensive state of the art in automated recognition of continuous signs, from different languages, based on the data sets used, features computed, techniques used, and recognition rates achieved. We find that, in the past, most work has been done on finger-spelled words and isolated sign recognition; however, recently there has been …
We have developed a video hand segmentation tool that can help with generating hand ground truth from sign language image sequences. This tool may greatly facilitate research in the area of sign language recognition. In this tool, we offer a semi-automatic scheme to assist with the localization of hand pixels, which is important for the purpose of …
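The snippet above does not say how the semi-automatic localization works. As a rough illustration only, one common starting point is a skin-color pass that a human annotator then corrects frame by frame. The HSV bounds, the morphological cleanup, and the `segment_hand_pixels` helper below are assumptions made for this sketch, not the tool's actual method.

```python
# Hedged sketch: a simple skin-color pass that proposes candidate hand pixels,
# which an annotator can then refine. All thresholds here are illustrative.
import cv2
import numpy as np

def segment_hand_pixels(frame_bgr, lower=(0, 40, 60), upper=(25, 180, 255)):
    """Return a binary mask of candidate hand (skin-colored) pixels."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower, np.uint8), np.array(upper, np.uint8))
    # Remove small speckles so the annotator only has to fix larger errors.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```

In a semi-automatic setting, a mask like this is only a first guess to be corrected by the user, which is what makes ground-truth generation faster than labeling every pixel from scratch.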
The common practice in sign language recognition is to first construct individual sign models, in terms of discrete state transitions, mostly represented using Hidden Markov Models, from manually isolated sign samples and then to use them to recognize signs in continuous sentences. In this paper, we (i) propose a continuous state space model, where the states …
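To make the "common practice" baseline in the snippet concrete, the sketch below trains one Gaussian HMM per sign from isolated samples and classifies a candidate segment by maximum log-likelihood. It relies on the third-party hmmlearn package, and the feature representation and vocabulary are placeholders; this is the baseline being contrasted, not the continuous state space model the paper proposes.

```python
# Hedged sketch of the conventional HMM-per-sign baseline described above.
import numpy as np
from hmmlearn import hmm  # assumes hmmlearn is installed

def train_sign_models(isolated_samples, n_states=5):
    """isolated_samples: dict mapping sign label -> list of (T_i, D) feature arrays."""
    models = {}
    for sign, samples in isolated_samples.items():
        X = np.vstack(samples)               # concatenate all samples for this sign
        lengths = [len(s) for s in samples]   # per-sample frame counts
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[sign] = m
    return models

def classify_segment(models, segment):
    """Return the sign whose HMM gives the highest log-likelihood for the segment."""
    return max(models, key=lambda sign: models[sign].score(segment))
```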
Some articulated motion representations rely on frame-wise abstractions of the statistical distribution of low-level features such as orientation, color, or relational distributions. As the configuration among parts changes with articulated motion, the distribution changes, tracing a trajectory in the latent space of distributions, which we call the …
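As a rough illustration of a frame-wise relational distribution, the sketch below histograms the pairwise displacements between feature points in each frame and then projects the per-frame histograms into a low-dimensional space, so the sequence traces a trajectory. The binning, the choice of feature points, and PCA via SVD as the embedding are assumptions made for the example, not the exact construction in the paper.

```python
# Hedged sketch: per-frame relational distributions and a latent trajectory.
import numpy as np

def relational_distribution(points, bins=16, max_dist=1.0):
    """points: (N, 2) array of normalized feature-point coordinates for one frame."""
    dx = points[:, None, 0] - points[None, :, 0]      # pairwise x displacements
    dy = points[:, None, 1] - points[None, :, 1]      # pairwise y displacements
    h, _, _ = np.histogram2d(dx.ravel(), dy.ravel(),
                             bins=bins, range=[[-max_dist, max_dist]] * 2)
    return (h / h.sum()).ravel()                      # normalized distribution

def motion_trajectory(frames_points, n_dims=3):
    """Embed a sequence of per-frame distributions into a low-dim trajectory (PCA via SVD)."""
    D = np.array([relational_distribution(p) for p in frames_points])
    D = D - D.mean(axis=0)
    _, _, vt = np.linalg.svd(D, full_matrices=False)
    return D @ vt[:n_dims].T                          # (num_frames, n_dims) trajectory
```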
Recognition of signs in sentences requires a training set constructed out of signs found in continuous sentences. Currently, this is done manually, which is a tedious process. In this work, we consider a framework where the modeler just provides multiple video sequences of sign language sentences, constructed to contain the vocabulary of interest. We learn …
We present a probabilistic framework to automatically learn models of recurring signs from multiple sign language video sequences containing the vocabulary of interest. We extract the parts of the signs that are present in most occurrences of the sign in context and are robust to the variations produced by adjacent signs. Each sentence video is first …
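The snippet is cut off before it describes the learning procedure. As a loose stand-in, the sketch below aligns two sentence-level feature sequences with dynamic time warping and keeps the stretch of frames that aligns most cheaply, one crude way to locate a recurring sign shared by both sentences. The actual framework is probabilistic, and the window length here is a made-up parameter.

```python
# Hedged sketch: DTW alignment of two sentences and the lowest-cost common stretch.
import numpy as np

def dtw_alignment(a, b):
    """a: (Ta, D), b: (Tb, D) feature sequences; returns aligned (i, j, cost) triples."""
    Ta, Tb = len(a), len(b)
    D = np.full((Ta + 1, Tb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, Ta + 1):
        for j in range(1, Tb + 1):
            c = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack the warping path from the end.
    path, i, j = [], Ta, Tb
    while i > 0 and j > 0:
        path.append((i - 1, j - 1, np.linalg.norm(a[i - 1] - b[j - 1])))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        i, j = (i - 1, j - 1) if step == 0 else ((i - 1, j) if step == 1 else (i, j - 1))
    return path[::-1]

def best_common_window(a, b, win=15):
    """Start frame (in a) of the win-length aligned stretch with the lowest total cost.
    Assumes the aligned path is at least win steps long."""
    path = dtw_alignment(a, b)
    costs = np.array([c for _, _, c in path])
    sums = np.convolve(costs, np.ones(win), mode="valid")
    k = int(np.argmin(sums))
    return path[k][0]
```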