A Computational Method to Emulate Bottom-up Attention to Remote Sensing Images

Abstract

In this paper, we propose a computational model that emulates an expert's bottom-up attention to remote sensing images. In neuroscience, bottom-up visual attention is an early processing stage that, when combined with contextual information, can support accurate recognition. An efficient and fast bottom-up model is therefore needed to ease the context processing that follows, and our computational model meets these requirements. The model reduces the complexity of visual attention by introducing textons, motivated by neurobiology, together with information entropy. First, the model processes images extremely rapidly while achieving relatively high hit rates. Second, it provides a rarity hierarchy by converting unique or rare visual attributes into numerical rarity values for further processing. Third, its results supply size, shape, and location information for the subsequent context-based attention computation.

* Corresponding author: Tao Fang. E-mail: tfang@sjtu.edu.cn; phone: +86-021-34204758
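The abstract attributes the rarity hierarchy to textons and information entropy but gives no implementation details. The sketch below is a hypothetical illustration, not the authors' method: pixels are quantized into coarse texture labels as a stand-in for textons (whose extraction is not specified here), and each pixel is scored by the self-information of its label, so rare attributes receive high values. All function names, the quantization scheme, and the parameters are assumptions for illustration.

```python
# Hypothetical sketch of an entropy/rarity-based attention map.
# The texton labelling (simple intensity quantization) and the
# self-information score are illustrative assumptions only.
import numpy as np

def texton_labels(gray, n_bins=16):
    """Quantize a grayscale image into coarse 'texton' labels (assumed proxy)."""
    gray = np.asarray(gray, dtype=np.float64)
    span = gray.max() - gray.min()
    norm = (gray - gray.min()) / (span + 1e-12)
    return np.minimum((norm * n_bins).astype(int), n_bins - 1)

def rarity_map(labels):
    """Score each pixel by the self-information -log2 p(label): rare labels score high."""
    counts = np.bincount(labels.ravel(), minlength=labels.max() + 1)
    probs = counts / counts.sum()
    return -np.log2(probs[labels] + 1e-12)

if __name__ == "__main__":
    img = np.random.rand(64, 64)           # stand-in for a remote sensing tile
    rmap = rarity_map(texton_labels(img))  # higher values = rarer, more "attended"
    print(rmap.shape, rmap.max())
```

Thresholding such a rarity map and taking connected components would then yield the size, shape, and location cues mentioned in the abstract; that step is likewise an assumption about how the output could be used.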

Cite this paper

@inproceedings{Chen2008ACM,
  title={A Computational Method to Emulate Bottom-up Attention to Remote Sensing Images},
  author={X. Chen and Huan Huo and Fengbo Tao and D. Li and Z. Li},
  year={2008}
}