Carol Neidle

A new, linguistically annotated video database for automatic sign language recognition is presented. The new RWTH-BOSTON-400 corpus, which consists of 843 sentences, several speakers, and separate subsets for training, development, and testing, is described in detail. For evaluation and benchmarking of automatic sign language recognition, large corpora are …
Research on recognition and generation of signed languages and the gestural component of spoken languages has been held back by the unavailability of large-scale linguistically annotated corpora of the kind that led to significant advances in the area of spoken language. A major obstacle has been the lack of computational tools to assist in efficient …
A method is presented to help users look up the meaning of an unknown sign from American Sign Language (ASL). The user submits a video of the unknown sign as a query, and the system retrieves the most similar signs from a database of sign videos. The user then reviews the retrieved videos to identify the video displaying the sign of interest. Hands are …
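
The retrieval step described above amounts to nearest-neighbor search over features extracted from the sign videos. The following is a minimal sketch under stated assumptions: it presumes each video has already been reduced to a fixed-length feature vector (for example, hand-trajectory descriptors), and the function name, data layout, and Euclidean distance measure are hypothetical rather than the paper's actual method.

import numpy as np

def retrieve_similar_signs(query_vec, database, top_k=10):
    """Return the labels of the top_k database signs closest to the query.

    database: list of (sign_label, feature_vector) pairs, where each
    feature_vector is a fixed-length NumPy array.
    """
    # Rank database entries by Euclidean distance to the query vector.
    ranked = sorted(
        database,
        key=lambda item: np.linalg.norm(query_vec - item[1]),
    )
    return [label for label, _ in ranked[:top_k]]

In such a setup, a user's query video would be passed through the same feature extractor as the database videos before calling this function.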
Alternations that are partly phonologically, partly morphologically conditioned are a central problem in phonological theory. In Optimality Theory, two types of solutions have been proposed: morphologically specialized phonological constraints (interface constraints) and different constraint rankings for different morphological categories (cophonologies). …
The lack of a written representation for American Sign Language (ASL) makes it difficult to do something as commonplace as looking up an unknown word in a dictionary. The majority of printed dictionaries organize ASL signs (represented in drawings or pictures) based on their nearest English translation; so unless one already knows the meaning of a sign, …
This paper addresses the problem of automatically recognizing linguistically significant nonmanual expressions in American Sign Language from video. We develop a fully automatic system that is able to track facial expressions and head movements, and detect and recognize facial events continuously from video. The main contributions of the proposed framework …
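
Continuous detection of facial events, as described above, can be framed as per-frame classification of tracked face features followed by grouping of consecutive positive frames into events. The sketch below illustrates only that grouping step; the feature representation, the classifier, and the minimum-length parameter are placeholder assumptions, not the paper's actual models.

def detect_events(frame_features, classifier, min_length=3):
    """Group consecutive positively classified frames into (start, end) events."""
    events, start = [], None
    for t, feats in enumerate(frame_features):
        if classifier.predict(feats):
            if start is None:
                start = t  # a new candidate event begins here
        elif start is not None:
            # The run of positive frames just ended; keep it if long enough.
            if t - start >= min_length:
                events.append((start, t - 1))
            start = None
    # Close an event that runs to the final frame.
    if start is not None and len(frame_features) - start >= min_length:
        events.append((start, len(frame_features) - 1))
    return events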
The American Sign Language Lexicon Video Dataset (ASLLVD) consists of videos of more than 3,300 ASL signs in citation form, each produced by 1-6 native ASL signers, for a total of almost 9,800 tokens. This dataset, including multiple synchronized videos showing the signing from different angles, will be shared publicly once the linguistic annotations and …
Most research in the field of sign language recognition has focused on the manual component of signing, despite the fact that critical grammatical information is expressed through facial expressions and head gestures. We therefore propose a novel framework for robust tracking and analysis of nonmanual behaviors, with an application to sign language …
Looking up the meaning of an unknown sign is not nearly so straightforward as looking up a word from a written language in a dictionary. This paper describes progress in an ongoing project to build a system that helps users look up the meaning of ASL signs. An important part of the project is building a video database with examples of a large number of …
We present a data-driven dynamic coupling between discrete and continuous methods for tracking objects with a high number of degrees of freedom, which overcomes the limitations of previous techniques. In our approach, two trackers work in parallel, and the coupling between them is based on the tracking error. We use a model-based continuous method to achieve accurate results and, in …
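
The error-based coupling described above can be summarized as a simple control loop: the accurate continuous tracker runs by default, and the other tracker takes over and reinitializes it when the tracking error grows too large. The sketch below is an illustration under stated assumptions; the tracker interfaces and the fixed threshold are hypothetical, whereas the coupling in the paper is data-driven.

def track_frame(frame, continuous_tracker, discrete_tracker, error_threshold=0.5):
    # Run the accurate, model-based continuous tracker first.
    state, error = continuous_tracker.update(frame)
    if error > error_threshold:
        # The continuous tracker has drifted: recover with the discrete
        # tracker and reseed the continuous one from its estimate.
        state = discrete_tracker.update(frame)
        continuous_tracker.reinitialize(state)
    return state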