Corpus ID: 58305327

A new web interface to facilitate access to corpora: development of the ASLLRP data access interface

@inproceedings{Vogler2012ANW,
  title={A new web interface to facilitate access to corpora: development of the ASLLRP data access interface},
  author={Christian Vogler and Carol Neidle},
  year={2012}
}
A significant obstacle to broad utilization of corpora is the difficulty in gaining access to the specific subsets of data and annotations that may be relevant for particular types of research. With that in mind, we have developed a web-based Data Access Interface (DAI), to provide access to the expanding datasets of the American Sign Language Linguistic Research Project (ASLLRP). The DAI facilitates browsing the corpora, viewing videos and annotations, searching for phenomena of interest, and… 

Citations

NEW shared & interconnected ASL resources: SignStream® 3 Software; DAI 2 for web access to linguistically annotated video corpora; and a sign bank
TLDR
A new version of the SignStream® software is presented, designed to facilitate linguistic analysis of ASL video, along with visualizations of computer-generated analyses of the video: graphical displays of eyebrow height, eye aperture, and head position.
ASL Video Corpora & Sign Bank: Resources Available through the American Sign Language Linguistic Research Project (ASLLRP)
The American Sign Language Linguistic Research Project (ASLLRP) provides Internet access to high-quality ASL video data, generally including front and side views and a close-up of the face.
Effect of Automatic Sign Recognition Performance on the Usability of Video-Based Search Interfaces for Sign Language Dictionaries
TLDR
In addition to the position of the desired word in a list of results, the similarity of the other words in the results list also affected users' judgements of the system, and metrics that incorporate the precision of the overall list correlated better with users' judgements than did metrics currently reported in prior ASL dictionary research.
Design and Evaluation of Hybrid Search for American Sign Language to English Dictionaries: Making the Most of Imperfect Sign Recognition
TLDR
A hybrid-search approach is presented, in which users begin with a video-based query and then filter the search results by linguistic properties, e.g., handshape (a rough sketch of this idea appears after the citation list below).
Effect of Sign-recognition Performance on the Usability of Sign-language Dictionary Search
TLDR
It was found that metrics that incorporate the precision of the overall list correlated better with users’ judgements than did metrics currently reported in prior ASL dictionary research.
A multimedia corpus of the Yiddish language
TLDR
The differences between the multimedia corpus of the Yiddish language and similar multimedia corpora, and the advantages of the query platform created for it, are described.
Extensions of the Sign Language Recognition and Translation Corpus RWTH-PHOENIX-Weather
This paper introduces RWTH-PHOENIX-Weather 2014, a video-based, large-vocabulary German sign language corpus that has been extended over the last two years, tripling the size of the original corpus.
Quantitative Survey of the State of the Art in Sign Language Recognition
TLDR
This study compiles the state of the art in a concise way to help advance the field and reveal open questions, noting shifts in the field from intrusive to non-intrusive capturing and from local to global features, as well as the lack of non-manual parameters in medium- and large-vocabulary recognition systems.
Documentary and Corpus Approaches to Sign Language Research
TLDR
This work introduces the field of sign language corpus linguistics, carefully defining the term ‘corpus’ in this context and discussing the emergence of the technology that has made this new approach to sign language research possible.
MLSLT: Towards Multilingual Sign Language Translation
TLDR
Experimental results show that the average performance of MLSLT outperforms the baseline BSLT model and the combination of multiple BSLT models in many cases; zero-shot translation in sign language is also explored, and the model can achieve performance comparable to the supervised BSLT model on some language pairs.
...
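
As a purely illustrative sketch of the hybrid-search idea described in "Design and Evaluation of Hybrid Search for American Sign Language to English Dictionaries" above: rank candidates by a recognizer score, then filter by user-selected linguistic properties such as handshape. This is not the paper's implementation; the class, function, handshape labels, and toy data below are all hypothetical.

# Hypothetical hybrid-search sketch: filter recognizer candidates by a
# linguistic property (here, handshape), then rank survivors by score.
from dataclasses import dataclass

@dataclass
class Candidate:
    gloss: str       # English gloss of the candidate sign
    handshape: str   # hypothetical handshape label
    score: float     # recognizer confidence, higher is better

def hybrid_search(candidates, handshape=None):
    """Keep candidates matching the requested handshape, ranked by score."""
    hits = [c for c in candidates if handshape is None or c.handshape == handshape]
    return sorted(hits, key=lambda c: c.score, reverse=True)

results = hybrid_search(
    [Candidate("BOOK", "B", 0.71), Candidate("HOUSE", "B", 0.65),
     Candidate("CAT", "5", 0.80)],
    handshape="B",
)
print([c.gloss for c in results])  # ['BOOK', 'HOUSE']

The point the sketch illustrates is that recognition ranking and linguistic filtering are independent steps, so a user-supplied filter can compensate for imperfect sign recognition.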

References

Challenges in development of the American Sign Language Lexicon Video Dataset (ASLLVD) corpus
TLDR
An example computer vision application that leverages the ASLLVD is reported: the formulation employs a HandShapes Bayesian Network (HSBN), which models the transition probabilities between start and end handshapes in monomorphemic lexical signs (a rough sketch of this transition-probability idea appears after this reference list).
SignStream™: A database tool for research on visual-gestural language
SignStream™ is a MacOS application that provides a single computing environment within which to view, annotate, analyze, and search through video and/or audio data, making it useful for linguistic research.
SignStream™ Annotation: Conventions used for the American Sign Language Linguistic Research Project
  • American Sign Language Linguistic Research Project Report No. 11, Boston University.
  • 2002
A Database Tool for Research on Visual-Gestural Language
  • Journal of Sign Language and Linguistics
  • 2002
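
A minimal sketch of the transition-probability idea behind the HSBN mentioned in the ASLLVD reference above. This is not the actual HSBN formulation: the function name, handshape labels, and toy tokens are hypothetical, and simple additive smoothing stands in for the Bayesian machinery.

# Hypothetical sketch: estimate P(end handshape | start handshape) from
# (start, end) handshape pairs observed in monomorphemic lexical signs.
from collections import Counter, defaultdict

def estimate_transitions(tokens, handshapes, alpha=1.0):
    """tokens: iterable of (start_handshape, end_handshape) pairs.
    Returns probs[start][end] with additive (Laplace) smoothing alpha."""
    counts = defaultdict(Counter)
    for start, end in tokens:
        counts[start][end] += 1
    probs = {}
    for start in handshapes:
        total = sum(counts[start].values()) + alpha * len(handshapes)
        probs[start] = {
            end: (counts[start][end] + alpha) / total
            for end in handshapes
        }
    return probs

# Toy usage with made-up handshape labels:
handshapes = ["A", "B", "5", "1"]
tokens = [("A", "A"), ("A", "5"), ("B", "B"), ("5", "1"), ("A", "A")]
P = estimate_transitions(tokens, handshapes)
print(P["A"]["A"])  # smoothed probability that a sign starting in 'A' also ends in 'A'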