Automatic annotation and retrieval of large-scale image and video collections based on distributed users
The majority of today’s content-based image retrieval systems rely on low-level image descriptors, which limits their capability to support meaningful interaction with users. Although relevance feedback helps, most current interaction paradigms remain far from the semantic representations that most people use to categorize and describe image content. We therefore propose a concept called “vocabulary-supported image retrieval,” which aims to enable the user to access an image database in a more natural way. In particular, this paper develops a technique to predict the system’s performance with respect to a user query. This allows the system to translate the user query into an internal query that satisfies predefined criteria such as precision and recall rates. In addition, given the performance parameters of the system’s sub-components, the feasibility and likely success of the retrieval process can be evaluated beforehand and optimized dynamically online.