Effectively Leveraging Attributes for Visual Similarity

@article{Mishra2021EffectivelyLA,
  title={Effectively Leveraging Attributes for Visual Similarity},
  author={Samarth Mishra and Zhongping Zhang and Yuan Shen and Ranjitha Kumar and Venkatesh Saligrama and Bryan A. Plummer},
  journal={2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)},
  year={2021},
  pages={3899-3904}
}
Measuring similarity between two images often requires complex reasoning along different axes (e.g., color, texture, or shape). Annotated attributes can provide insight into which of these axes matter for a given similarity judgment. Prior work tends to treat these annotations as complete, and so adopts a simplistic approach: predicting attributes on single images, which are, in turn, used to measure similarity. However, it is impractical for a dataset to fully annotate…
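
The truncated abstract above describes the single-image baseline that the paper argues against: predict an attribute vector for each image independently, then compare the two vectors. As an illustration only, and not the authors' proposed method, a minimal PyTorch sketch of that baseline is shown below; the backbone architecture, the number of attributes, and the use of cosine similarity between attribute probabilities are all assumptions made for the example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttributePredictor(nn.Module):
    """Hypothetical per-image attribute predictor: a small CNN followed by a
    sigmoid head that scores each of `num_attributes` independently."""
    def __init__(self, num_attributes: int = 40):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_attributes)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # Per-image attribute probabilities, shape (batch, num_attributes).
        return torch.sigmoid(self.head(self.backbone(images)))

def attribute_similarity(model: AttributePredictor,
                         img_a: torch.Tensor,
                         img_b: torch.Tensor) -> torch.Tensor:
    """Single-image baseline: predict attributes for each image separately,
    then compare the two attribute vectors (cosine similarity here)."""
    with torch.no_grad():
        attrs_a = model(img_a)
        attrs_b = model(img_b)
    return F.cosine_similarity(attrs_a, attrs_b, dim=-1)

# Usage with random tensors standing in for a pair of images.
model = AttributePredictor(num_attributes=40)
sim = attribute_similarity(model, torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224))
print(sim.item())
```

Because the attribute vocabulary is fixed by whatever the predictor was trained to label, a baseline like this can only reason over attributes the dataset happens to annotate, which is the limitation the truncated abstract begins to raise.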

