An efficient access method for multimodal video retrieval


Efficient and effective handling of video documents depends on the availability of indexes, and manual indexing is infeasible for large video collections. Video combines different types of data from different modalities, and using information from multiple modalities may result in more robust and accurate video retrieval. Effective indexing for video retrieval therefore requires a multimodal approach, in which either the most appropriate modality is selected or the different modalities are used in a collaborative fashion. This paper presents a new metric access method -- the Slim2-tree -- which combines information from multiple modalities within a single index structure for video retrieval. Experimental studies on a large real-world dataset show the video similarity search performance of the proposed technique. Additionally, we present experiments comparing our method against state-of-the-art multimodal solutions. The comparative results demonstrate that our technique improves the performance of video similarity queries.
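As a rough illustration of the general idea behind indexing multiple modalities under a single metric (not the authors' Slim2-tree itself), the following sketch combines per-modality distances into one distance function. A weighted sum of metrics is itself a metric, so a metric access method can index the combined distance directly and prune searches with the triangle inequality. All names, weights, and features here are hypothetical.

```python
from math import sqrt

def euclidean(a, b):
    """Plain Euclidean distance between two feature vectors."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def multimodal_distance(v1, v2, weights=(0.5, 0.5)):
    """Combine visual and audio distances into one metric.

    A non-negative weighted sum of metrics preserves the metric
    axioms, so a single metric index can be built over it.
    (Modality names and weights are illustrative assumptions.)
    """
    d_visual = euclidean(v1["visual"], v2["visual"])
    d_audio = euclidean(v1["audio"], v2["audio"])
    return weights[0] * d_visual + weights[1] * d_audio

def knn(query, database, k=3):
    """Naive k-nearest-neighbour search over the combined metric.

    A metric tree would avoid this linear scan by pruning
    subtrees whose covering radii exclude the query ball.
    """
    return sorted(database, key=lambda v: multimodal_distance(query, v))[:k]
```

A metric access method such as a Slim-tree variant would replace the linear scan in `knn` with a tree traversal, but the combined distance function it evaluates would have the same shape.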

DOI: 10.1007/s11042-014-1917-2

Cite this paper

@article{Sperandio2013AnEA,
  title   = {An efficient access method for multimodal video retrieval},
  author  = {Ricardo C. Sperandio and Zenilton Kleber Gonçalves do Patroc{\'i}nio and Hugo Bastos de Paula and Silvio Jamil Ferzoli Guimar{\~a}es},
  journal = {Multimedia Tools and Applications},
  year    = {2013},
  volume  = {74},
  pages   = {1357--1375}
}