Thi Quynh Nhi Tran

Cross-modal tasks occur naturally for multimedia content that can be described along two or more modalities, such as visual content and text. Such tasks require "translating" information from one modality to another. Methods like kernelized canonical correlation analysis (KCCA) attempt to solve such tasks by finding aligned subspaces in the description…
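As a rough illustration of the KCCA idea only (not the authors' exact formulation), the sketch below solves a regularized kernel CCA as a generalized eigenproblem over two precomputed Gram matrices. The function names and the `reg` parameter are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def center_gram(K):
    """Center a Gram matrix in feature space."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kcca(Kx, Ky, reg=0.1, n_components=2):
    """Regularized kernel CCA: find dual weights (alpha, beta) whose
    induced projections of the two modalities are maximally correlated.

    Kx, Ky : (n, n) Gram matrices of the two modalities (e.g. image
             features and text features) on the same n paired samples.
    reg    : ridge regularization keeping the problem well-posed.
    """
    n = Kx.shape[0]
    Kx, Ky = center_gram(Kx), center_gram(Ky)
    # Cross-covariance block on the left, regularized variance blocks
    # on the right; both block matrices are symmetric.
    A = np.block([[np.zeros((n, n)), Kx @ Ky],
                  [Ky @ Kx, np.zeros((n, n))]])
    B = np.block([[Kx @ Kx + reg * np.eye(n), np.zeros((n, n))],
                  [np.zeros((n, n)), Ky @ Ky + reg * np.eye(n)]])
    # Generalized symmetric eigenproblem A v = rho B v; the largest
    # eigenvalues correspond to the most correlated direction pairs.
    vals, vecs = eigh(A, B)
    idx = np.argsort(vals)[::-1][:n_components]
    alpha, beta = vecs[:n, idx], vecs[n:, idx]
    return alpha, beta
```

A new image or text item is then projected into the shared subspace by evaluating its kernel against the training samples and applying `alpha` or `beta`, after which cross-modal retrieval reduces to nearest-neighbor search in that subspace.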
This paper describes our participation in the ImageCLEF 2016 Scalable Concept Image Annotation main task and the Text Illustration teaser. For image annotation, we focused on better localizing the detected features. To this end, we estimated the saliency of the image to collect a list of potentially interesting regions in the image. We also added a specific…
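For a sense of what a saliency-based region-proposal step can look like (not necessarily the pipeline used in this work), here is a generic sketch; it assumes opencv-contrib-python is installed and uses a placeholder input path.

```python
import cv2

# Hypothetical input; "example.jpg" is a placeholder path.
image = cv2.imread("example.jpg")

# Spectral-residual static saliency: a common, fast saliency detector
# available in the opencv-contrib saliency module.
saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
ok, saliency_map = saliency.computeSaliency(image)

# Binarize the saliency map (Otsu threshold) and keep the bounding
# boxes of salient blobs as candidate regions for localized annotation.
binary = (saliency_map * 255).astype("uint8")
_, binary = cv2.threshold(binary, 0, 255,
                          cv2.THRESH_BINARY | cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
regions = [cv2.boundingRect(c) for c in contours]  # (x, y, w, h) boxes
```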
Cross-modal retrieval increasingly relies on joint statistical models built from large amounts of data represented according to several modalities. However, some information that is poorly represented by these models can be very significant for a retrieval task. We show that, by appropriately identifying and taking such information into account, the results…