Amel Znaidia

We report the participation of CEA LIST in the Scalable Concept Image Annotation subtask of ImageCLEF 2013. The full system is based on both textual and visual similarity to each concept, which are merged by late fusion. Each image is visually represented with a bag of visterms, computed from a dense grid of SIFT descriptors sampled every 3 pixels, that are locally soft coded …
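The bag-of-visterms pipeline described above can be sketched as follows. This is a minimal illustration of local soft coding and sum pooling, not the submitted system: the function names, the number of nearest codewords `k`, and the kernel parameter `beta` are all illustrative assumptions.

```python
import numpy as np

def local_soft_code(descriptor, codebook, k=5, beta=10.0):
    """Soft-assign one local descriptor to its k nearest codewords.

    `codebook` is an (n_words, dim) array of visual words; `k` and
    `beta` are illustrative values, not the paper's exact settings.
    """
    dists = np.linalg.norm(codebook - descriptor, axis=1)
    nearest = np.argsort(dists)[:k]               # k nearest codewords
    weights = np.exp(-beta * dists[nearest] ** 2) # Gaussian-kernel weights
    code = np.zeros(len(codebook))
    code[nearest] = weights / weights.sum()       # normalized soft assignment
    return code

def bag_of_visterms(descriptors, codebook):
    """Sum-pool the soft codes of all local descriptors into one histogram."""
    hist = sum(local_soft_code(d, codebook) for d in descriptors)
    return hist / np.abs(hist).sum()              # L1 normalization
```

In a dense-SIFT setting, `descriptors` would hold one 128-dimensional SIFT vector per grid point; the resulting histogram is the image's visual signature.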
We address the problem of tag completion for automatic image annotation. Our method consists of two main steps: creating a list of "candidate tags" from the visual neighbors of the untagged image, then using them as pieces of evidence to be combined into the final list of predicted tags. Both steps introduce a scheme to handle imprecision and …
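The first step, building candidate tags from visual neighbors, can be sketched as weighted tag voting. This is a simplified stand-in for the evidence-combination scheme the abstract refers to; the function name and the optional similarity weights are assumptions for illustration.

```python
from collections import Counter

def candidate_tags(neighbor_taglists, weights=None, top_n=5):
    """Rank candidate tags by (weighted) votes from visual neighbors.

    `neighbor_taglists` holds one tag list per visual neighbor of the
    untagged image; `weights` (e.g. visual similarities) are optional.
    Names are illustrative, not the paper's exact procedure.
    """
    weights = weights or [1.0] * len(neighbor_taglists)
    votes = Counter()
    for tags, w in zip(neighbor_taglists, weights):
        for t in set(tags):          # each neighbor votes once per tag
            votes[t] += w
    return [t for t, _ in votes.most_common(top_n)]
```

A tag proposed by several close neighbors thus outranks one proposed by a single distant neighbor, which is the intuition behind using neighbors as pieces of evidence.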
The automatic attribution of semantic labels to unlabeled or weakly labeled images has received considerable attention but, given the complexity of the problem, remains a hard research topic. Here we propose a unified classification framework that mixes textual and visual information in a seamless manner. Unlike most recent previous work, computer vision …
We introduce the bag-of-multimedia-words model, which tightly combines the heterogeneous information coming from the text and the pixel-based information of a multimedia document. The proposed multimedia feature generation process is generic for any modality and aims at enriching a multimedia document description with compact and discriminative …
This paper describes the CEA LIST participation in the ImageCLEF 2011 Photo Annotation challenge. This year, our motivation was to investigate annotation performance when using the provided Flickr tags as additional information. First, we present an overview of the local and global visual features used in this work. Second, we present a new method, which we …
This paper describes our participation in the ImageCLEF 2012 Photo Annotation Task. We focus on how to use the tags associated with the images to improve annotation performance. We submitted one textual-only and three multimodal runs. Our first textual model [14] is based on the local soft coding of image tags over a dictionary of the most frequent tags. A …
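As a baseline for the textual model above, an image's tags can be encoded as a vector over a dictionary of the most frequent tags. The sketch below uses plain binary presence; the submitted run used a local soft coding of tags, whose exact weighting is not reproduced here, and the function name is an assumption.

```python
def tag_vector(image_tags, dictionary):
    """Encode an image's tags over a fixed dictionary of frequent tags.

    Binary presence baseline; the paper's actual model replaces the
    0/1 entries with locally soft-coded weights.
    """
    present = set(image_tags)
    return [1.0 if t in present else 0.0 for t in dictionary]
```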
Classifier combination is known to generally perform better than each individual classifier by taking into account the complementarity between the input pieces of information. Dempster-Shafer theory is a framework of interest for making such a fusion at the decision level; in addition, it allows handling the conflict that can exist between the classifiers as …
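Decision-level fusion in this framework rests on Dempster's rule of combination, which can be sketched as follows for mass functions represented as dicts over focal sets. This is the standard normalized rule, not necessarily the paper's exact conflict-handling variant.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions with Dempster's rule.

    `m1` and `m2` map frozenset focal elements of a common frame to
    masses summing to 1. Mass assigned to empty intersections is the
    conflict, which the standard rule normalizes away.
    """
    combined = {}
    conflict = 0.0
    for A, a in m1.items():
        for B, b in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + a * b
            else:
                conflict += a * b                 # mass on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources fully disagree")
    return {S: v / (1.0 - conflict) for S, v in combined.items()}
```

For two binary classifiers, the frame would be `{"pos", "neg"}`, with each classifier's score split between `{"pos"}`, `{"neg"}`, and the whole frame (its residual uncertainty).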
Image annotation consists of describing the image content according to a finite number of a priori fixed concepts. In practice, we use two modalities for this: the image and its user tags. However, these tags are generally imperfect and only some of them are related to the image content. In this work, we are interested in taking tag imperfection into account to …