Visual Search at Pinterest

@inproceedings{Jing2015VisualSA,
  title={Visual Search at Pinterest},
  author={Yushi Jing and David C. Liu and Dmitry Kislyuk and Andrew Zhai and Jiajing Xu and Jeff Donahue and Sarah Tavel},
  booktitle={Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining},
  year={2015}
}
  • Published 28 May 2015
We demonstrate that, with the availability of distributed computation platforms such as Amazon Web Services and open-source tools, it is possible for a small engineering team to build, launch and maintain a cost-effective, large-scale visual search system. We also demonstrate, through a comprehensive set of live experiments at Pinterest, that content recommendation powered by visual search improves user engagement. By sharing our implementation details and learnings from launching a commercial… 

Visual Discovery at Pinterest

This paper presents an overview of the visual discovery engine powering Pinterest's visual search and recommendation services, and shares the rationales behind technical and product decisions such as the use of object detection and interactive user interfaces.

Visual Search at eBay

This paper harnesses the availability of the large image collection of eBay listings and state-of-the-art deep learning techniques to perform visual search at scale, and presents benchmarks on the ImageNet dataset showing that the approach is faster and more accurate than several unsupervised baselines.

Learning a Unified Embedding for Visual Search at Pinterest

A multi-task deep metric learning system that learns a single unified image embedding to power the authors' multiple visual search products, demonstrating that the proposed unified embedding improves both relevance and engagement of the visual search products, for both browsing and searching, when compared to existing specialized embeddings.

Visual Recommendation Use Case for an Online Marketplace Platform: allegro.pl

A small content-based visual recommendation project built as part of the Allegro online marketplace platform that extracted relevant data only from images, as they are inherently better at capturing visual attributes than textual offer descriptions.

Amazon Shop the Look: A Visual Search System for Fashion and Home

Shop the Look, a web-scale fashion and home product visual search system deployed at Amazon, is introduced and it is believed that the fast-growing Shop the Look service is shaping the way that customers shop on Amazon.

Shop The Look: Building a Large Scale Visual Shopping System at Pinterest

This work provides a holistic view of how Shop The Look, a shopping oriented visual search system, was built along with lessons learned from addressing shopping needs, including core technology across object detection and visual embeddings, serving infrastructure for realtime inference, and data labeling methodology for training/evaluation data collection and human evaluation.
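The detect-then-retrieve pipeline described above can be sketched end to end: propose object crops, embed each crop, and look up nearest catalog items. Everything here is illustrative, not Pinterest's actual code — the function names, the toy deterministic "embedding", and the two-item catalog are all stand-ins.

```python
import math

def detect_objects(image):
    """Stand-in detector: return (label, crop) pairs for an image."""
    return [(label, region) for label, region in image]

def embed(crop):
    """Stand-in embedder: a fixed 4-d unit vector derived from the crop id."""
    h = sum(ord(c) * (i + 1) for i, c in enumerate(crop))
    v = [((h >> (4 * i)) & 0xF) + 1.0 for i in range(4)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def nearest(query_vec, index, k=1):
    """Cosine-similarity lookup over (item_id, vector) pairs."""
    scored = sorted(index, key=lambda iv: -sum(a * b for a, b in zip(query_vec, iv[1])))
    return [item for item, _ in scored[:k]]

# Catalog of shoppable items, embedded once offline.
catalog = [("sofa_123", "sofa_crop"), ("lamp_456", "lamp_crop")]
index = [(item, embed(crop)) for item, crop in catalog]

# Query scene with two detected regions: each crop is matched to a product.
scene = [("sofa", "sofa_crop"), ("lamp", "lamp_crop")]
results = {label: nearest(embed(crop), index)[0]
           for label, crop in detect_objects(scene)}
```

In the real system each stage is a heavyweight component (a trained detector, a learned embedding model, an approximate-nearest-neighbor index); the sketch only shows how they compose.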

The Design and Implementation of a Real Time Visual Search System on JD E-commerce Platform

The design and implementation of a visual search system for real time image retrieval on JD.com is presented, which can support real time visual search with hundreds of billions of product images at sub-second timescales and handle frequent image updates through distributed hierarchical architecture and efficient indexing methods.
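The coarse-to-fine idea behind a distributed hierarchical index can be illustrated with a minimal sketch (not JD's actual system): assign each catalog vector to its nearest coarse centroid, then answer a query by scanning only the bucket of the query's nearest centroid instead of the full catalog.

```python
def dist2(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def build_index(vectors, centroids):
    """Assign each vector id to the bucket of its nearest coarse centroid."""
    buckets = {i: [] for i in range(len(centroids))}
    for vid, v in vectors.items():
        best = min(range(len(centroids)), key=lambda i: dist2(v, centroids[i]))
        buckets[best].append(vid)
    return buckets

def search(query, vectors, centroids, buckets, k=1):
    """Coarse step picks one bucket; fine step scans only that bucket."""
    coarse = min(range(len(centroids)), key=lambda i: dist2(query, centroids[i]))
    candidates = buckets[coarse]
    return sorted(candidates, key=lambda vid: dist2(query, vectors[vid]))[:k]

centroids = [(0.0, 0.0), (10.0, 10.0)]
vectors = {"a": (0.5, 0.2), "b": (1.2, 0.1), "c": (9.8, 10.3)}
buckets = build_index(vectors, centroids)
top = search((0.4, 0.3), vectors, centroids, buckets)  # → ["a"]
```

At web scale the buckets become shards distributed across machines and the centroid count grows into the millions, but the two-stage structure is the same.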

Deep Learning based Large Scale Visual Recommendation and Search for E-Commerce

A unified deep convolutional neural network architecture, called VisNet, is proposed to learn embeddings that capture the notion of visual similarity across several semantic granularities, underpinning a large-scale visual search and recommendation system for e-commerce.

Structured Visual Search via Composition-aware Learning

The authors' model output is trained to change symmetrically with respect to input transformations, producing a sensitive feature space; this yields a highly efficient search technique, since the approach learns from less data using a smaller feature space.

Human Curation and Convnets: Powering Item-to-Item Recommendations on Pinterest

It is demonstrated that signals derived from user curation, the activity of users organizing content, are highly effective when used in conjunction with content-based ranking in an item-to-item recommendation system.
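The blend of curation and content signals can be sketched concretely: count how often two pins are saved to the same board (the curation signal), then combine that count with a content-based similarity score. The weights and similarity values below are made up for illustration.

```python
from collections import Counter
from itertools import combinations

# Boards are user-curated collections of pins.
boards = [
    ["pin_a", "pin_b", "pin_c"],
    ["pin_a", "pin_b"],
    ["pin_b", "pin_d"],
]

# Curation signal: pairwise co-occurrence counts over boards.
cooc = Counter()
for board in boards:
    for x, y in combinations(sorted(board), 2):
        cooc[(x, y)] += 1

def curation_score(x, y):
    return cooc[tuple(sorted((x, y)))]

# Content signal: pretend visual similarity in [0, 1].
visual_sim = {("pin_a", "pin_b"): 0.3, ("pin_a", "pin_d"): 0.9}

def blended(x, y, w_visual=1.0, w_curation=0.5):
    v = visual_sim.get(tuple(sorted((x, y))), 0.0)
    return w_visual * v + w_curation * curation_score(x, y)

# Candidates for pin_a, ranked by the blended score: the curation
# signal lifts pin_b (co-saved twice) above the visually closer pin_d.
candidates = ["pin_b", "pin_d"]
ranking = sorted(candidates, key=lambda c: -blended("pin_a", c))
```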

References


VisualRank: Applying PageRank to Large-Scale Image Search

  • Yushi Jing, S. Baluja
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
  • 2008
This work casts the image-ranking problem as the task of identifying "authority" nodes on an inferred visual similarity graph, proposes VisualRank to analyze the visual link structure among images, and describes the techniques required to make the system practical for large-scale deployment in commercial search engines.
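The VisualRank idea can be sketched as PageRank-style power iteration over a visual-similarity matrix: images similar to many well-connected images accumulate "authority". The similarity values below are made up; a real system would derive them from local-feature matches between images.

```python
def visualrank(S, damping=0.85, iters=100):
    """Power iteration on a column-normalized similarity matrix S."""
    n = len(S)
    col = [sum(S[i][j] for i in range(n)) for j in range(n)]
    r = [1.0 / n] * n
    for _ in range(iters):
        r = [(1 - damping) / n
             + damping * sum(S[i][j] * r[j] / col[j] for j in range(n))
             for i in range(n)]
    return r

# Image 0 is strongly similar to both 1 and 2; 1 and 2 are barely
# similar to each other, so image 0 emerges as the "authority".
S = [[0.0, 0.8, 0.7],
     [0.8, 0.0, 0.1],
     [0.7, 0.1, 0.0]]
scores = visualrank(S)
```

Because each column of the normalized matrix sums to one, the scores remain a probability distribution, exactly as in PageRank.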

Large scale visual recommendations from street fashion images

A completely automated large-scale visual recommendation system for fashion that efficiently harnesses the availability of large quantities of online fashion images and their rich meta-data, together with a large-scale annotated dataset of fashion images that can be exploited for future research in data-driven visual fashion.

Image retrieval: Ideas, influences, and trends of the new age

Surveys almost 300 key theoretical and empirical contributions in the current decade related to image retrieval and automatic image annotation, discusses the spawning of related subfields, and examines how existing image retrieval techniques can be adapted to build systems that are useful in the real world.

Very Deep Convolutional Networks for Large-Scale Image Recognition

This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.

DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition

DeCAF, an open-source implementation of deep convolutional activation features, along with all associated network parameters, are released to enable vision researchers to be able to conduct experimentation with deep representations across a range of visual concept learning paradigms.

Scalable Object Detection Using Deep Neural Networks

This work proposes a saliency-inspired neural network model for detection, which predicts a set of class-agnostic bounding boxes along with a single score for each box, corresponding to its likelihood of containing any object of interest.

ImageNet Large Scale Visual Recognition Challenge

The creation of this benchmark dataset and the advances in object recognition that have been possible as a result are described, and the state-of-the-art computer vision accuracy with human accuracy is compared.

Going deeper with convolutions

We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC 2014).

Caffe: Convolutional Architecture for Fast Feature Embedding

Caffe provides multimedia scientists and practitioners with a clean and modifiable framework for state-of-the-art deep learning algorithms and a collection of reference models for training and deploying general-purpose convolutional neural networks and other deep models efficiently on commodity architectures.

Fully convolutional networks for semantic segmentation

The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning.