Latent Task Adaptation with Large-Scale Hierarchies

Abstract

Recent years have witnessed the success of large-scale image classification systems that are able to identify objects among thousands of possible labels. However, it is not yet clear how general classifiers, such as those trained on ImageNet, can best be adapted to specific tasks, each of which covers only a semantically related subset of all the objects in the world. Retraining classifiers whenever a new task is given is inefficient and suboptimal, and it is inapplicable when tasks are not given explicitly but are instead specified implicitly as a set of image queries. In this paper we propose a novel probabilistic model that, given a set of query images from a latent task, jointly identifies the underlying task and performs prediction with a linear-time probabilistic inference algorithm. We present efficient ways to estimate the model parameters, along with an open-source toolbox for training classifiers at large scale in a distributed fashion. Empirical results on ImageNet data show a significant performance increase over several baseline algorithms.
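
To make the idea of latent task adaptation concrete, the following is a minimal illustrative sketch, not the paper's hierarchy-based linear-time algorithm: it assumes an explicit list of candidate tasks (each a subset of labels), a uniform task prior, and that the classifier's softmax outputs are used as per-image label posteriors. All function names and the prior choice are assumptions for illustration.

```python
import numpy as np

def task_posterior(probs, tasks, task_prior=None):
    """Posterior over candidate tasks given a set of query images.

    probs      : (N, K) array; probs[i, y] is the classifier's p(y | x_i).
    tasks      : list of T label-index lists, one candidate task each.
    task_prior : optional length-T prior over tasks; uniform if omitted.
    """
    n_tasks = len(tasks)
    prior = np.full(n_tasks, 1.0 / n_tasks) if task_prior is None else np.asarray(task_prior)
    log_post = np.log(prior)
    for t, labels in enumerate(tasks):
        # Score each task by the probability mass the classifier assigns to its
        # labels, accumulated over all query images (a crude stand-in for p(X | task)).
        mass = probs[:, labels].sum(axis=1)
        log_post[t] += np.log(mass + 1e-12).sum()
    log_post -= log_post.max()
    post = np.exp(log_post)
    return post / post.sum()

def adapted_predictions(probs, tasks, post):
    """Per-image label posteriors after marginalizing over the latent task."""
    n, k = probs.shape
    out = np.zeros((n, k))
    for t, labels in enumerate(tasks):
        # Restrict the classifier output to the task's labels, renormalize,
        # and weight by the task posterior.
        restricted = np.zeros((n, k))
        restricted[:, labels] = probs[:, labels]
        restricted /= restricted.sum(axis=1, keepdims=True)
        out += post[t] * restricted
    return out
```

In this sketch, identifying the task first and then renormalizing within it is what concentrates probability mass on the task-relevant labels; the paper's contribution is doing this jointly and efficiently over a large label hierarchy rather than over an enumerated task list.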

DOI: 10.1109/ICCV.2013.260

Cite this paper

@inproceedings{Jia2013LatentTA,
  title     = {Latent Task Adaptation with Large-Scale Hierarchies},
  author    = {Yangqing Jia and Trevor Darrell},
  booktitle = {2013 IEEE International Conference on Computer Vision},
  year      = {2013},
  pages     = {2080-2087}
}