Quqing Chen

This paper presents a principled and practical method for computing the visual saliency of spatiotemporal events in full-motion videos. Based on the assumption that uniqueness or informativeness correlates with saliency, our model predicts the saliency of a spatiotemporal event from the information it contains. To compute the uniqueness of the …
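The information-based view of saliency described above can be sketched in a few lines: model the distribution of local feature vectors, and score each one by its self-information, so rare (unique) patches score as salient. The Gaussian density model and the patch features here are illustrative assumptions, not the paper's actual estimator.

```python
import numpy as np

def self_information_saliency(patches):
    """Score each feature vector by its self-information (-log p) under a
    Gaussian fitted to all patches: rarer patches are more salient.
    The Gaussian model is an illustrative assumption."""
    mean = patches.mean(axis=0)
    cov = np.cov(patches, rowvar=False) + 1e-6 * np.eye(patches.shape[1])
    inv_cov = np.linalg.inv(cov)
    diff = patches - mean
    # Log-density of each patch (up to an additive constant): the
    # Mahalanobis term of the Gaussian log-likelihood.
    log_p = -0.5 * np.einsum('nd,de,ne->n', diff, inv_cov, diff)
    return -log_p  # self-information: rarer patches score higher

# A patch far from the main cluster receives the highest saliency score.
rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 4))
feats[0] += 8.0  # synthetic outlier patch
sal = self_information_saliency(feats)
```

In a video setting the same scoring would be applied to spatiotemporal feature vectors rather than static patches; the scoring rule itself is unchanged.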
Existing video coding methods can cause visual quality and buffer occupancy to fluctuate significantly at scene cuts. To address this problem, we have developed a novel visual-attention-based adaptive bit allocation method. We first perform scene-cut detection to extract frames in the vicinity of dramatic scene changes; we then perform visual saliency …
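The scene-cut detection step this abstract mentions is commonly done by comparing intensity histograms of consecutive frames; a minimal sketch of that idea follows. The 32-bin histogram, L1 distance, and threshold of 0.5 are illustrative assumptions, not the method from the paper.

```python
import numpy as np

def detect_scene_cuts(frames, threshold=0.5):
    """Flag frame i as a scene cut when its normalized intensity
    histogram differs sharply (L1 distance) from frame i-1's.
    Metric and threshold are illustrative assumptions."""
    cuts = []
    prev_hist = None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=32, range=(0, 256))
        hist = hist / hist.sum()  # normalize so L1 distance lies in [0, 2]
        if prev_hist is not None and np.abs(hist - prev_hist).sum() > threshold:
            cuts.append(i)
        prev_hist = hist
    return cuts
```

Frames flagged this way would then receive the adaptive bit allocation the abstract describes, instead of the encoder's default rate control.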
We propose an efficient compression algorithm for massive models, which consist of a large number of small- to medium-sized connected components. It is based on efficiently exploiting repetitive patterns in the input model. Compared with [Shikhare et al. 2001], the state-of-the-art work on exploiting repetitive patterns to compress massive models, our …
We propose a new compression algorithm for massive models, which consist of a large number of small- to medium-sized connected components. It works by efficiently exploiting repetitive patterns in the input model. Compared with similar work based on finding repetitive patterns, our new algorithm is more efficient at detecting repeated components by recognizing …
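The core idea in the two compression abstracts above, storing each repeated connected component once and referencing it elsewhere, can be sketched by grouping components under a translation-invariant geometric fingerprint. The fingerprint used here (vertex count plus sorted, rounded centroid distances) is an assumption for illustration and stands in for the papers' actual pattern-recognition step.

```python
import numpy as np

def group_repeated_components(components, decimals=4):
    """Group connected components that are translated copies of each
    other, using a translation-invariant fingerprint: vertex count plus
    sorted, rounded distances from the component's centroid.
    This fingerprint is an illustrative assumption."""
    groups = {}
    for idx, verts in enumerate(components):
        v = np.asarray(verts, dtype=float)
        centered = v - v.mean(axis=0)           # remove translation
        dists = np.linalg.norm(centered, axis=1)
        key = (len(v), tuple(np.round(np.sort(dists), decimals)))
        groups.setdefault(key, []).append(idx)
    return list(groups.values())

# Two translated copies of the same triangle fall into one group; a
# compressor would store that shape once plus per-instance transforms.
tri = [(0, 0), (1, 0), (0, 1)]
tri_shifted = [(5, 5), (6, 5), (5, 6)]
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
```

A real pipeline would also normalize rotation and scale before matching, and encode one representative mesh per group plus a transform per instance.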