Supplementary Materials for 'Salient Object Detection: A Discriminative Regional Feature Integration Approach'

Abstract

In this supplementary material, we present more details on learning the Random Forest saliency regressor, together with additional comparisons against state-of-the-art algorithms.

1 LEARNING

1.1 Learning a Similarity Score between Two Adjacent Superpixels

To learn the similarity score of two adjacent superpixels s_i and s_j, the pair is described by a 222-dimensional feature vector, comprising their saliency features, the feature contrast between them, and the geometry features of their shared boundary. The saliency features are introduced in our paper; the feature contrast and superpixel boundary geometry features are presented in Fig. 1.

1.2 Feature Importance in a Random Forest

Training a Random Forest regressor amounts to independently building each decision tree. For the t-th decision tree, the training samples X_t = {x_{t_1}, x_{t_2}, ..., x_{t_Q}} with targets A_t = {a_{t_1}, a_{t_2}, ..., a_{t_Q}}, where t_i ∈ [1, Q], are drawn from the full training set with replacement. Because the samples are drawn with replacement, some samples are never used to train the tree; these are called out-of-bag (oob) data. After a decision tree is constructed, its oob data can be used to estimate the importance of features. Suppose that the feature f was used to construct one of the nodes of the t-th tree and D_oob denotes its oob samples. We first compute the prediction error of the t-th decision tree on these oob data.
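The bootstrap/oob procedure above can be illustrated with a minimal sketch on toy data. This is not the paper's implementation: scikit-learn's DecisionTreeRegressor stands in for one tree of the forest, the regression target and all variable names are illustrative, and the importance estimate shown is the standard oob permutation scheme (permute a feature among the oob samples and measure the increase in prediction error).

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Toy training set standing in for (feature vector, saliency score) pairs.
# Feature 0 is the only informative one, so it should score highest.
Q = 200
X = rng.normal(size=(Q, 5))
a = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=Q)

# Bootstrap for the t-th tree: draw Q indices with replacement;
# the samples never drawn are the out-of-bag (oob) data.
boot = rng.integers(0, Q, size=Q)
oob = np.setdiff1d(np.arange(Q), boot)

tree = DecisionTreeRegressor(max_depth=4, random_state=0)
tree.fit(X[boot], a[boot])

# Prediction error of this tree on its oob data (mean squared error).
err_oob = np.mean((tree.predict(X[oob]) - a[oob]) ** 2)

# oob permutation importance of each feature f: permute f among the
# oob samples, re-predict, and record the increase in error.
importances = []
for f in range(X.shape[1]):
    X_perm = X[oob].copy()
    X_perm[:, f] = rng.permutation(X_perm[:, f])
    err_perm = np.mean((tree.predict(X_perm) - a[oob]) ** 2)
    importances.append(err_perm - err_oob)
```

In a forest, these per-tree importance scores would be averaged over all trees in which the feature appears.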

Cite this paper

@inproceedings{Jiang2014SupplementaryMF,
  title  = {Supplementary Materials for 'Salient Object Detection: A Discriminative Regional Feature Integration Approach'},
  author = {Huaizu Jiang and Zejian Yuan and Ming-Ming Cheng and Yihong Gong and Nanning Zheng and Jingdong Wang},
  year   = {2014}
}