Predicting Multiple Attributes via Relative Multi-task Learning

@inproceedings{Chen2014PredictingMA,
  title={Predicting Multiple Attributes via Relative Multi-task Learning},
  author={Lin Chen and Qiang Zhang and Baoxin Li},
  booktitle={2014 IEEE Conference on Computer Vision and Pattern Recognition},
  year={2014},
  pages={1027-1034}
}
Relative attribute learning aims to learn ranking functions that describe the relative strength of attributes. Most current approaches learn a ranking function for each attribute independently, without considering possible intrinsic relatedness among the attributes. For a problem involving multiple attributes, it is reasonable to assume that exploiting such relatedness would benefit learning, especially when the number of labeled training pairs is very limited. In… 
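The abstract is truncated above, so the paper's exact formulation is not shown here. As a purely illustrative sketch of the general idea it describes (per-attribute pairwise ranking functions whose weight vectors are tied together by a multi-task penalty), the following toy example uses a hinge ranking loss and a mean-coupling regularizer; the synthetic data, the choice of regularizer, and all parameter names are assumptions for illustration, not the authors' method.

import numpy as np

rng = np.random.default_rng(0)
n, d, T = 200, 20, 3                       # images, feature dimension, attributes

# Hypothetical ground-truth rankers that share a common component.
w_shared = rng.normal(size=d)
w_true = np.stack([w_shared + 0.3 * rng.normal(size=d) for _ in range(T)])
X = rng.normal(size=(n, d))

# Ordered training pairs per attribute: (i, j) means attribute t is
# stronger in image i than in image j.
pairs = []
for t in range(T):
    scores = X @ w_true[t]
    idx = rng.choice(n, size=(150, 2))
    pairs.append(idx[scores[idx[:, 0]] > scores[idx[:, 1]]])

def train(lam_task=1.0, lam_reg=0.01, lr=0.01, epochs=200):
    # Subgradient descent on a hinge ranking loss plus a penalty that pulls
    # each attribute's weight vector toward the mean across attributes.
    W = np.zeros((T, d))
    for _ in range(epochs):
        w_mean = W.mean(axis=0)
        for t in range(T):
            i, j = pairs[t][:, 0], pairs[t][:, 1]
            diff = X[i] - X[j]                       # pairwise feature differences
            viol = diff @ W[t] < 1.0                 # margin-violating pairs
            grad = -diff[viol].sum(axis=0)
            grad += 2 * lam_task * (W[t] - w_mean)   # couple the attributes
            grad += 2 * lam_reg * W[t]
            W[t] -= lr * grad / max(len(i), 1)
    return W

W = train()
for t in range(T):
    i, j = pairs[t][:, 0], pairs[t][:, 1]
    acc = np.mean((X[i] - X[j]) @ W[t] > 0)
    print(f"attribute {t}: training-pair ordering accuracy = {acc:.2f}")

The final loop only checks how often each learned ranker orders its own training pairs correctly; a real evaluation would use held-out pairs, and the mean-coupling penalty merely stands in for whatever relatedness structure the paper actually exploits.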

Citations

Deep Relative Attributes
TLDR
A novel deep relative attributes (DRA) algorithm to learn visual features and the effective nonlinear ranking function to describe the RA of image pairs in a unified framework that consistently and significantly outperforms the state-of-the-art RA learning methods.
A Unified Multiplicative Framework for Attribute Learning
TLDR
This paper proposes a unified multiplicative framework for attribute learning, where images and category information are jointly projected into a shared feature space, where the latent factors are disentangled and multiplied for attribute prediction.
Clustering-Based Joint Feature Selection for Semantic Attribute Prediction
TLDR
A novel feature selection approach which embeds attribute correlation modeling in multi-attribute joint feature selection and significantly outperforms the state-of-the-art approaches is proposed.
From Common to Special: When Multi-Attribute Learning Meets Personalized Opinions
TLDR
In the proposed model, the diversity of personalized opinions and the intrinsic relationship among multiple attributes are unified in a common-to-special manner and the model integrates a common cognition factor, an attribute-specific bias factor and a user-specific bias factor.
Clustered Multitask Feature Learning for Attribute Prediction (anonymous CVPR submission, 2015)
TLDR
A novel clustered multi-task feature selection approach utilizing K-means and group sparsity regularizers, and an efficient alternating optimization algorithm is proposed that can automatically capture the task structure and result in obvious performance gain in attribute prediction, when compared with existing state-of-the-art approaches.
Sparse Feature Preservation for Relative Attribute Learning
TLDR
A sparse feature preservation (SFP) method that preserves the most important features when learning each attribute model, formulated by applying the rearrangement inequality to relative attribute model learning.
Unifying Visual Attribute Learning with Object Recognition in a Multiplicative Framework
TLDR
This paper proposes a unified multiplicative framework for attribute learning that can both accurately predict attributes and learn efficient image representations and can improve the state-of-the-art performance on several widely used datasets.
Learning Attributes Equals Multi-Source Domain Generalization
TLDR
This work provides a novel perspective to attribute detection and proposes to gear the techniques in multi-source domain generalization for the purpose of learning cross-category generalizable attribute detectors.
Incomplete Attribute Learning with auxiliary labels
TLDR
The experimental results show that the proposed method achieves state-of-the-art performance with access to partially observed attribute annotations, and that it can be solved efficiently in an alternating manner by optimizing quadratic programming subproblems and updating parameters with closed-form solutions.
Attribute-correlated local regions for deep relative attributes learning
TLDR
The concatenation of the high-level global feature and intermediate local feature is adopted to predict the relative attributes and it is shown that the proposed method produces a competitive performance compared with the state of the art in relative attribute prediction on three public benchmarks.

References

Showing 1-10 of 27 references
Relative attributes
TLDR
This work proposes a generative model over the joint space of attribute ranking outputs, and proposes a novel form of zero-shot learning in which the supervisor relates the unseen object category to previously seen objects via attributes (for example, ‘bears are furrier than giraffes’).
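For context, this line of work learns, for each attribute m, a linear ranking function w_m from ordered pairs O_m and similar-strength pairs S_m. Recalled here from the relative attributes literature (so the exact constants and slack handling are an assumption), the objective is roughly

\[ \min_{w_m}\ \tfrac{1}{2}\|w_m\|_2^2 + C\Big(\sum_{(i,j)\in O_m}\xi_{ij}^2 + \sum_{(i,j)\in S_m}\gamma_{ij}^2\Big)
\quad\text{s.t.}\quad w_m^\top(x_i - x_j) \ge 1 - \xi_{ij},\qquad |w_m^\top(x_i - x_j)| \le \gamma_{ij}. \]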
A Framework for Learning Predictive Structures from Multiple Tasks and Unlabeled Data
TLDR
This paper presents a general framework in which the structural learning problem can be formulated and analyzed theoretically, relates it to learning with unlabeled data, proposes algorithms for structural learning, and investigates the associated computational issues.
Clustered Multi-Task Learning: A Convex Formulation
TLDR
A new spectral norm is designed that encodes this a priori assumption that tasks are clustered into groups, which are unknown beforehand, and that tasks within a group have similar weight vectors, resulting in a new convex optimization formulation for multi-task learning.
Integrating low-rank and group-sparse structures for robust multi-task learning
TLDR
A robust multi-task learning algorithm which learns multiple tasks simultaneously as well as identifies the irrelevant (outlier) tasks, and derives a key property of the optimal solution to RMTL, which establishes a theoretical bound for characterizing the learning performance of RMTL.
Robust multi-task feature learning
TLDR
This paper proposes a Robust Multi-Task Feature Learning algorithm (rMTFL) which simultaneously captures a common set of features among relevant tasks and identifies outlier tasks, and provides a detailed theoretical analysis on the proposed rMTFL formulation.
Regularized multi-task learning
TLDR
An approach to multi-task learning based on the minimization of regularization functionals similar to existing ones, such as the one for Support Vector Machines, that have been successfully used in the past for single-task learning is presented.
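For orientation, this style of formulation (reconstructed from memory, so the exact constants are an assumption) writes each task's weight vector as a shared part plus a task-specific offset, w_t = w_0 + v_t, and solves an SVM-like problem that trades the two off:

\[ \min_{w_0,\{v_t\},\{\xi_{it}\}}\ \sum_{t=1}^{T}\sum_{i=1}^{m}\xi_{it} + \frac{\lambda_1}{T}\sum_{t=1}^{T}\|v_t\|^2 + \lambda_2\|w_0\|^2
\quad\text{s.t.}\quad y_{it}\,(w_0+v_t)^\top x_{it} \ge 1-\xi_{it},\qquad \xi_{it}\ge 0. \]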
Clustered Multi-Task Learning Via Alternating Structure Optimization
TLDR
The equivalence relationship between ASO and CMTL is shown, providing significant new insights into ASO as well as their inherent relationship, and the proposed convex CMTL formulation is significantly more efficient especially for high-dimensional data.
Graph-Structured Multi-task Regression and an Efficient Optimization Method for General Fused Lasso
TLDR
This paper proposes graph-guided fused lasso (GFlasso) for structured multi-task regression that exploits the graph structure over the output variables and introduces a novel penalty function based on fusion penalty to encourage highly correlated outputs to share a common set of relevant inputs.
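Schematically (recalled from the graph-guided fused lasso literature, so the exact weighting is an assumption), for K output tasks with coefficient vectors \beta_k and an output graph E whose edges carry correlations r_{ml}, the objective couples an ordinary lasso penalty with a fusion penalty along the edges:

\[ \min_{\{\beta_k\}}\ \sum_{k=1}^{K}\|y_k - X\beta_k\|_2^2 + \lambda\sum_{k=1}^{K}\|\beta_k\|_1 + \gamma\sum_{(m,l)\in E} f(r_{ml}) \sum_{j}\big|\beta_{jm} - \mathrm{sign}(r_{ml})\,\beta_{jl}\big|, \]

where f(\cdot) is an edge-weighting function of the output correlation.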
Multi-Task Learning for Classification with Dirichlet Process Priors
TLDR
Experimental results on two real-life MTL problems indicate that the proposed algorithms, which automatically identify subgroups of related tasks whose training data appear to be drawn from similar distributions, are more accurate than simpler approaches such as single-task learning, pooling of data across all tasks, and simplified approximations to DP.
WhittleSearch: Image search with relative attribute feedback
TLDR
A novel mode of feedback for image search, where a user describes which properties of exemplar images should be adjusted in order to more closely match his/her mental model of the image(s) sought, which outperforms traditional binary relevance feedback in terms of search speed and accuracy.