Paper Doll Parsing: Retrieving Similar Styles to Parse Clothing Items


Clothing recognition is an extremely challenging problem due to wide variation in clothing item appearance, layering, and style. In this paper, we tackle the clothing parsing problem using a retrieval-based approach. For a query image, we find similar styles in a large database of tagged fashion images and use these examples to parse the query. Our approach combines parses from pre-trained global clothing models, local clothing models learned on the fly from retrieved examples, and transferred parse masks ("paper doll" item transfer) from retrieved examples. Experimental evaluation shows that our approach significantly outperforms the state of the art in parsing accuracy.
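The abstract describes fusing three parse estimates: a pre-trained global model, local models learned from retrieved examples, and transferred parse masks. A minimal sketch of such a fusion step is below; the per-pixel score maps, weights, and function names are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def combine_parses(global_scores, local_scores, transfer_scores,
                   weights=(1.0, 1.0, 1.0)):
    """Fuse three (H, W, num_labels) per-pixel score maps into one labeling.

    Hypothetical sketch: weighted sum of the three sources, then an
    argmax over labels at each pixel. The real method may combine
    these terms differently (e.g., inside a CRF energy).
    """
    fused = (weights[0] * global_scores +
             weights[1] * local_scores +
             weights[2] * transfer_scores)
    return fused.argmax(axis=-1)  # (H, W) map of predicted label indices

# Toy example: a 2x2 "image" with 3 candidate clothing labels.
g = np.random.rand(2, 2, 3)   # global model scores
l = np.random.rand(2, 2, 3)   # retrieval-specific local model scores
t = np.random.rand(2, 2, 3)   # transferred paper-doll mask scores
labels = combine_parses(g, l, t)
```

The weighted-sum-then-argmax form is just the simplest way to show how evidence from the three sources could be pooled per pixel.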

DOI: 10.1109/ICCV.2013.437


Cite this paper

@article{Yamaguchi2013PaperDP,
  title={Paper Doll Parsing: Retrieving Similar Styles to Parse Clothing Items},
  author={Kota Yamaguchi and M. Hadi Kiapour and Tamara L. Berg},
  journal={2013 IEEE International Conference on Computer Vision},
  year={2013},
  pages={3519-3526}
}