Relative attributes

@inproceedings{Parikh2011RelativeA,
  title={Relative attributes},
  author={Devi Parikh and Kristen Grauman},
  booktitle={2011 International Conference on Computer Vision},
  year={2011},
  pages={503--510}
}
  • Devi Parikh, Kristen Grauman
  • Published 6 November 2011
  • Computer Science
  • 2011 International Conference on Computer Vision
Human-nameable visual “attributes” can benefit various recognition tasks. However, existing techniques restrict these properties to categorical labels (for example, a person is ‘smiling’ or not, a scene is ‘dry’ or not), and thus fail to capture more general semantic relationships. We propose to model relative attributes. Given training data stating how object/scene categories relate according to different attributes, we learn a ranking function per attribute. The learned ranking functions… 
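The per-attribute ranking function is learned from such ordered comparisons. A minimal sketch of the idea, on toy synthetic data, with a plain subgradient solver standing in for the paper's modified RankSVM formulation (all names and data here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy image features; the hidden attribute strength is the first coordinate.
X = rng.normal(size=(40, 5))
strength = X[:, 0]

# Ordered pairs (i, j): image i has MORE of the attribute than image j.
pairs = [(i, j) for i in range(40) for j in range(40)
         if strength[i] > strength[j] + 0.5]

# RankSVM-style objective: hinge loss on difference vectors,
# w . (x_i - x_j) >= 1 for each ordered pair, trained by subgradient descent.
w = np.zeros(X.shape[1])
lr, lam = 0.01, 0.01
for epoch in range(200):
    for i, j in pairs:
        d = X[i] - X[j]
        if w @ d < 1.0:            # margin violated: push w toward d
            w += lr * (d - lam * w)
        else:                       # only the regularizer acts
            w -= lr * lam * w

rank_score = X @ w                  # learned relative-attribute score
```

Given the learned `w`, any two images can be compared on the attribute by comparing their scores, which is exactly what categorical (present/absent) attribute classifiers cannot express.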

Figures and Tables from this paper

Citations

Active Learning for Image Ranking Over Relative Visual Attributes
TLDR
This work investigates three active learning methods for image ranking over relative visual attributes, designed to minimize annotation effort, and introduces a novel form of active sample selection that chooses training samples which are both visually diverse and satisfy the low-margin property.
Relative Attributes for Enhanced Human-Machine Communication
TLDR
Overall, it is found that relative attributes enhance the precision of communication between humans and computer vision algorithms, providing the richer language needed to fluidly "teach" a system about visual concepts.
Semantic Transform: Weakly Supervised Semantic Inference for Relating Visual Attributes
TLDR
The Semantic Transform is introduced, which, under minimal supervision, adaptively finds a semantic feature space along with a class ordering that relates the classes as well as possible.
Attributes as Operators
TLDR
This work proposes to model attributes as operators, a new approach to modeling visual attributes that learns a semantic embedding that explicitly factors out attributes from their accompanying objects, and also benefits from novel regularizers expressing attribute operators' effects.
Distinctive Parts for Relative attributes
TLDR
A part-based representation is proposed that jointly represents a pair of images and explicitly encodes correspondences among parts, thus better capturing the minute part-level differences that make an attribute more prominent in one image than another, compared to a global representation.
Inferring Analogous Attributes
TLDR
This work develops a tensor factorization approach which, given a sparse set of class-specific attribute classifiers, can infer new ones for object-attribute pairs unobserved during training and demonstrates both the need for category-sensitive attributes as well as the method's successful transfer.
Relative Parts: Distinctive Parts for Learning Relative Attributes
TLDR
This paper introduces a part-based representation combining a pair of images that specifically compares corresponding parts and associates a locally adaptive "significance-coefficient" that represents its discriminative ability with respect to a particular attribute.
Fine-Grained Comparisons with Attributes
TLDR
Local learning approaches for fine-grained visual comparisons, where a predictive model is trained on the fly using only the data most relevant to the novel input, outperform state-of-the-art methods for relative attribute prediction on challenging datasets, including a large newly curated shoe dataset.
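The local-learning idea above can be sketched as follows — a hypothetical illustration on synthetic data, not the paper's exact method: for each novel input pair, a small ranker is trained on the fly using only the training pairs most similar to the query.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 4))
truth = X[:, 0]                       # hidden attribute strength

# Labeled training pairs: (i, j) means image i is stronger than image j.
train_pairs = [(i, j) for i in range(60) for j in range(60)
               if truth[i] > truth[j] + 0.3]

def local_rank(q_i, q_j, k=50):
    """Predict whether image q_i has more of the attribute than q_j,
    fitting a tiny ranker on only the k training pairs nearest the query."""
    q = np.concatenate([X[q_i], X[q_j]])
    feats = np.array([np.concatenate([X[i], X[j]]) for i, j in train_pairs])
    nearest = np.argsort(np.linalg.norm(feats - q, axis=1))[:k]
    w = np.zeros(X.shape[1])
    for _ in range(100):              # perceptron-style hinge updates
        for idx in nearest:
            i, j = train_pairs[idx]
            d = X[i] - X[j]
            if w @ d < 1.0:
                w += 0.05 * d
    return (w @ (X[q_i] - X[q_j])) > 0
```

The point of the design is that the model is never trained globally: each comparison gets its own ranker fitted to the locally relevant evidence, which is what makes fine-grained distinctions tractable.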
Beyond Comparing Image Pairs: Setwise Active Learning for Relative Attributes
  • Lucy Liang, Kristen Grauman
  • Computer Science
  • 2014 IEEE Conference on Computer Vision and Pattern Recognition
  • 2014
TLDR
This work introduces a novel criterion that requests a partial ordering for a set of examples minimizing the total rank margin in attribute space, subject to a visual diversity constraint, and develops an efficient strategy to search for sets that meet this criterion.
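A toy sketch of that setwise criterion, under assumed names (a current attribute ranker `w` and candidate image features `X`, both invented here): score each candidate set by its total rank margin and reject sets that are not visually diverse. The paper develops an efficient search strategy; exhaustive enumeration below is for illustration only.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
X = rng.normal(size=(12, 3))        # candidate image features
w = np.array([1.0, 0.2, -0.1])      # current attribute ranker (assumed)

def set_cost(idxs, min_div=0.5):
    """Total rank margin of a candidate set; infinite if the set
    fails the visual-diversity constraint."""
    scores = X[list(idxs)] @ w
    margin = sum(abs(a - b) for a, b in combinations(scores, 2))
    diversity = min(np.linalg.norm(X[i] - X[j])
                    for i, j in combinations(idxs, 2))
    return margin if diversity >= min_div else float("inf")

# Pick the size-3 set with the smallest total rank margin:
# these are the images the ranker is least sure how to order.
best = min(combinations(range(12), 3), key=set_cost)
```

Asking the annotator to partially order `best` yields several pairwise constraints at once, which is why setwise queries can be more label-efficient than one pair at a time.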
Attributes as Operators: Factorizing Unseen Attribute-Object Compositions
TLDR
This work proposes to model attributes as operators, a new approach to modeling visual attributes that learns a semantic embedding that explicitly factors out attributes from their accompanying objects, and also benefits from novel regularizers expressing attribute operators’ effects.

References

Showing 1–10 of 35 references
Interactively building a discriminative vocabulary of nameable attributes
TLDR
An approach to define a vocabulary of attributes that is both human understandable and discriminative is introduced, and a novel "nameability" manifold is proposed that prioritizes candidate attributes by their likelihood of being associated with a nameable property.
Describing objects by their attributes
TLDR
This paper proposes to shift the goal of recognition from naming to describing, and introduces a novel feature selection method for learning attributes that generalize well across categories.
Joint learning of visual attributes, object classes and visual saliency
  • G. Wang, D. Forsyth
  • Computer Science
  • 2009 IEEE 12th International Conference on Computer Vision
  • 2009
TLDR
A method is presented to learn visual attributes and object classes jointly, showing that the more accurate of the two models can guide the improvement of the less accurate one.
Learning Visual Attributes
TLDR
It is shown that attributes can be learnt starting from a text query to Google image search, and can then be used to recognize the attribute and determine its spatial extent in novel real-world images.
Attribute-centric recognition for cross-category generalization
TLDR
This work introduces a new dataset that provides annotation for sharing models of appearance and correlation across categories and uses it to learn part and category detectors that serve as the visual basis for an integrated model of objects.
A Discriminative Latent Model of Object Classes and Attributes
TLDR
This work presents a discriminatively trained model for joint modelling of object class labels and their visual attributes and captures the correlations among attributes using an undirected graphical model built from training data.
Learning to detect unseen object classes by between-class attribute transfer
TLDR
The experiments show that by using an attribute layer it is indeed possible to build an object detection system that requires no training images of the target classes. A new large-scale dataset, “Animals with Attributes”, of over 30,000 animal images is assembled, matching the 50 classes in Osherson's classic table of how strongly humans associate 85 semantic attributes with animal classes.
Learning Models for Object Recognition from Natural Language Descriptions
TLDR
This work proposes natural language processing methods for extracting salient visual attributes from natural language descriptions to use as ‘templates’ for the object categories, and applies vision methods to extract corresponding attributes from test images.
Automatic Attribute Discovery and Characterization from Noisy Web Data
TLDR
This work focuses on discovering attributes and their visual appearance, and is as agnostic as possible about the textual description, and characterizes attributes according to their visual representation: global or local, and type: color, texture, or shape.
Attribute and simile classifiers for face verification
TLDR
Two novel methods for face verification are presented, using binary classifiers trained to recognize the presence or absence of describable aspects of visual appearance, along with a new dataset of real-world images of public figures acquired from the internet.