Learning Speech-Based Video Concept Models Using WordNet

Abstract

Modeling concepts using supervised or unsupervised machine learning approaches is becoming increasingly important for video semantic indexing, retrieval, and filtering applications. Videos naturally contain multimodal audio, speech, visual, and text data, which are combined to infer the overall semantic concepts. However, most research in the literature has been conducted within a single modality. In this paper we propose an unsupervised technique that builds context-independent keyword lists from WordNet for modeling desired speech-based concepts. Furthermore, we propose an extended speech-based video concept (ESVC) model that reorders and extends these keyword lists by supervised learning based on multimodal annotations. Experimental results show that the context-independent models achieve performance comparable to conventional supervised learning algorithms, and that the ESVC model achieves about 53% and 28.4% relative improvement on two testing subsets of the TRECVID 2003 corpus over a prior state-of-the-art speech-based video concept detection algorithm.
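
As a rough illustration of how a context-independent keyword list might be assembled from WordNet (the abstract does not specify the authors' exact expansion strategy; the choice of relations and the NLTK interface below are assumptions), one could collect a concept's synonyms and direct hyponyms as evidence words to match against speech transcripts:

```python
# Hypothetical sketch, not the paper's implementation: expand a video concept
# into a keyword list using WordNet synonyms and direct hyponyms via NLTK.
from nltk.corpus import wordnet as wn


def concept_keywords(concept):
    """Return a keyword list for a concept from its WordNet noun synsets."""
    keywords = set()
    for synset in wn.synsets(concept, pos=wn.NOUN):
        # Synonyms of the concept itself.
        keywords.update(lemma.name().replace('_', ' ') for lemma in synset.lemmas())
        # Direct hyponyms (more specific terms) as additional evidence words.
        for hyponym in synset.hyponyms():
            keywords.update(lemma.name().replace('_', ' ') for lemma in hyponym.lemmas())
    return sorted(keywords)


if __name__ == "__main__":
    # Example: build a keyword list for a video concept such as "aircraft".
    print(concept_keywords("aircraft"))
```

Such a list could then be matched against automatic speech recognition transcripts to score shots for the concept; the ESVC model described above would further reorder and extend the list using supervised learning over multimodal annotations.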

Cite this paper

@inproceedings{Song2005LearningSV,
  title={Learning Speech-Based Video Concept Models Using WordNet},
  author={Xiaodan Song and Ching-Yung Lin and Ming-Ting Sun},
  year={2005}
}