Learning to Learn from Web Data through Deep Semantic Embeddings

@inproceedings{Gomez2018LearningTL,
  title={Learning to Learn from Web Data through Deep Semantic Embeddings},
  author={R. Gomez and Llu{\'i}s G{\'o}mez and J. Gibert and Dimosthenis Karatzas},
  booktitle={ECCV Workshops},
  year={2018}
}
  • R. Gomez, Lluís Gómez, J. Gibert, Dimosthenis Karatzas
  • Published in ECCV Workshops, 2018
  • Computer Science
  • In this paper we propose to learn a multimodal image and text embedding from Web and Social Media data, aiming to leverage the semantic knowledge learnt in the text domain and transfer it to a visual model for semantic image retrieval. [...] Finally, we present a new dataset, InstaCities1M, composed of Instagram images and their associated texts, which can be used for fair comparison of image-text embeddings.
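The abstract describes training a visual model against a semantic space derived from text, so that images and text queries become directly comparable. The sketch below is a minimal PyTorch illustration of that idea, assuming a CNN is regressed onto the pretrained text embedding of each image's associated caption and retrieval is done by cosine similarity; the backbone, embedding dimensionality, and loss are illustrative assumptions, not the paper's exact configuration.

    # Minimal sketch (assumptions): a CNN image encoder is trained to regress the
    # pretrained text embedding of each image's caption, so semantic image
    # retrieval reduces to nearest-neighbour search in the text-embedding space.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torchvision import models

    EMBED_DIM = 400  # hypothetical text-embedding dimensionality

    class ImageToTextEmbedding(nn.Module):
        """CNN that maps an image into the semantic space of the text embedding."""
        def __init__(self, embed_dim=EMBED_DIM):
            super().__init__()
            backbone = models.resnet50(weights=None)  # any visual backbone would do
            backbone.fc = nn.Linear(backbone.fc.in_features, embed_dim)
            self.backbone = backbone

        def forward(self, images):
            return F.normalize(self.backbone(images), dim=-1)

    def training_step(model, images, caption_embeddings, optimizer):
        """One regression step: pull the visual embedding towards the (frozen)
        text embedding of the associated caption."""
        optimizer.zero_grad()
        pred = model(images)
        target = F.normalize(caption_embeddings, dim=-1)
        loss = 1.0 - F.cosine_similarity(pred, target).mean()  # illustrative loss
        loss.backward()
        optimizer.step()
        return loss.item()

    def retrieve(model, query_text_embedding, gallery_images, top_k=5):
        """Semantic retrieval: rank gallery images by cosine similarity between
        their visual embeddings and a text-query embedding."""
        with torch.no_grad():
            gallery = model(gallery_images)                    # (N, D)
            query = F.normalize(query_text_embedding, dim=-1)  # (D,)
            scores = gallery @ query
            return scores.topk(top_k).indices

Because the visual model is trained to land in the text-embedding space rather than in a jointly learned space, any pretrained text representation (e.g. word-level or topic-level embeddings of the caption) can serve as the target without retraining a text encoder.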
