Using Sentences as Semantic Representations in Large Scale Zero-Shot Learning

@inproceedings{Cacheux2020UsingSA,
  title={Using Sentences as Semantic Representations in Large Scale Zero-Shot Learning},
  author={Yannick Le Cacheux and H. Borgne and M. Crucianu},
  booktitle={ECCV Workshops},
  year={2020}
}
Zero-shot learning aims to recognize instances of unseen classes, for which no visual instance is available during training, by learning multimodal relations between samples from seen classes and corresponding class semantic representations. These class representations usually consist of either attributes, which do not scale well to large datasets, or word embeddings, which lead to poorer performance. A good trade-off could be to employ short sentences in natural language as class descriptions…
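The abstract describes the generic zero-shot setup: visual features of seen-class samples are related to fixed class semantic vectors (attributes, word embeddings, or here sentence embeddings of class descriptions), and unseen classes are recognized by their similarity to those vectors. Below is a minimal sketch of that setup, not the paper's exact model; the feature dimensions, class counts, temperature, and random data are illustrative assumptions.

```python
# Minimal zero-shot learning sketch (illustrative, not the paper's exact model).
# Visual features are projected into the semantic space; every class, seen or
# unseen, is represented by a fixed semantic vector (e.g. a sentence embedding
# of its description), and prediction is similarity to those vectors.
import torch
import torch.nn as nn
import torch.nn.functional as F

D_VIS, D_SEM = 2048, 768           # assumed visual / sentence-embedding sizes
N_SEEN, N_UNSEEN = 100, 20         # assumed numbers of seen / unseen classes

proj = nn.Linear(D_VIS, D_SEM)     # learned visual-to-semantic projection

def compatibility(vis_feats, class_embs):
    """Cosine similarity between projected visual features and class embeddings."""
    v = F.normalize(proj(vis_feats), dim=-1)
    c = F.normalize(class_embs, dim=-1)
    return v @ c.t()               # (batch, n_classes) score matrix

# --- training on seen classes (dummy tensors stand in for real data) ---
seen_embs = torch.randn(N_SEEN, D_SEM)      # embeddings of seen-class descriptions
optimizer = torch.optim.Adam(proj.parameters(), lr=1e-3)
for _ in range(100):
    x = torch.randn(32, D_VIS)              # batch of visual features
    y = torch.randint(0, N_SEEN, (32,))     # their seen-class labels
    loss = F.cross_entropy(compatibility(x, seen_embs) / 0.05, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# --- zero-shot inference: classify among classes never seen in training ---
unseen_embs = torch.randn(N_UNSEEN, D_SEM)  # embeddings of unseen-class descriptions
test_x = torch.randn(8, D_VIS)
pred = compatibility(test_x, unseen_embs).argmax(dim=1)  # predicted unseen-class indices
```

In this sketch, swapping the semantic source (attribute vectors, word embeddings, or sentence embeddings of class descriptions) only changes how `seen_embs` and `unseen_embs` are built; the compatibility model and the zero-shot inference step stay the same.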
