A category-level 3-D object dataset: Putting the Kinect to work

Abstract

The recent proliferation of a cheap but high-quality depth sensor, the Microsoft Kinect, has brought the need for a challenging category-level 3D object detection dataset to the fore. We review current 3D datasets and find them lacking in variation of scenes, categories, instances, and viewpoints. Here we present our dataset of color and depth image pairs, gathered in real domestic and office environments. It currently includes over 50 classes, with more images added continuously by a crowd-sourced collection effort. We establish baseline performance on a PASCAL VOC-style detection task, and suggest two ways in which the inferred real-world size of an object can be used to improve detection. The dataset and annotations can be downloaded at http://www.kinectdata.com.
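
To make the world-size cue concrete, the sketch below shows one plausible way to infer an object's metric size from a Kinect depth map and a detection bounding box using the pinhole-camera relation (size_m = size_px * depth / focal_length). The focal-length value and function names are illustrative assumptions, not details taken from the paper.

import numpy as np

# Approximate focal length (in pixels) of the Kinect camera at 640x480 resolution.
# This value is an assumption for illustration, not a figure from the paper.
FOCAL_LENGTH_PX = 525.0

def estimate_world_size(depth_map, bbox):
    """Estimate the metric width and height of a detected object.

    depth_map : 2-D array of per-pixel depth in meters (0 where the sensor gave no reading).
    bbox      : (x_min, y_min, x_max, y_max) detection box in pixel coordinates.
    Returns (width_m, height_m) under the pinhole-camera model, or None if the
    box contains no valid depth.
    """
    x_min, y_min, x_max, y_max = bbox
    patch = depth_map[y_min:y_max, x_min:x_max]
    valid = patch[patch > 0]                     # ignore missing Kinect readings
    if valid.size == 0:
        return None
    depth = np.median(valid)                     # robust depth estimate for the object
    width_m = (x_max - x_min) * depth / FOCAL_LENGTH_PX
    height_m = (y_max - y_min) * depth / FOCAL_LENGTH_PX
    return width_m, height_m

A size estimate like this could then be compared against a per-category size prior to rescore or prune detections, which is the general idea behind using world size as an additional cue.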

DOI: 10.1007/978-1-4471-4640-7_8

Cite this paper

@article{Janoch2011AC3,
  title   = {A category-level 3-D object dataset: Putting the Kinect to work},
  author  = {Allison Janoch and Sergey Karayev and Yangqing Jia and Jonathan T. Barron and Mario Fritz and Kate Saenko and Trevor Darrell},
  journal = {2011 IEEE International Conference on Computer Vision Workshops (ICCV Workshops)},
  year    = {2011},
  pages   = {1168--1174}
}