Occlusion robust free-viewpoint video synthesis based on inter-camera/-frame interpolation

Abstract

In this paper, we propose a novel free-viewpoint video synthesis method that adaptively extracts textures even from occluded areas. The conventional method, based on object segmentation and inter-camera interpolation, has two major problems. The first is that textures of incorrectly segmented objects degrade the image quality of the synthesized free-viewpoint video; for example, some object textures have missing regions while others include unwanted regions. The second is that inter-camera interpolation often causes inconsistency between an object's appearance and its direction of motion. To overcome these problems, we propose a new texture acquisition scheme based on inter-frame interpolation that handles cases where both object segmentation and inter-camera interpolation are insufficient. In addition, the proposed method adaptively selects among three texture acquisition schemes: segmentation, inter-camera interpolation, and inter-frame interpolation. This selection is performed by jointly considering the segmentation results and the direction of the virtual viewpoint. The experimental results show that the proposed method acquires appropriate textures. Consequently, the subjective quality of the generated free-viewpoint video is improved while the original motion is preserved even for occluded objects.
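The paper does not include code; the following Python sketch only illustrates the kind of adaptive selection among the three texture acquisition schemes described in the abstract. The decision criteria (a hypothetical segmentation-coverage score and the angular distance between the virtual viewpoint and the nearest real camera) and all thresholds are assumptions introduced here for illustration, not the authors' actual algorithm.

```python
def select_texture_source(mask_coverage, view_angle_deg,
                          coverage_threshold=0.9, angle_threshold_deg=15.0):
    """Choose a texture acquisition scheme for one object region.

    Hypothetical decision rule (not from the paper):
      mask_coverage    -- fraction of the object silhouette covered by the
                          segmentation mask (proxy for segmentation quality)
      view_angle_deg   -- angular distance between the virtual viewpoint and
                          the nearest real camera
    """
    if mask_coverage >= coverage_threshold:
        # Segmentation looks complete: reuse the segmented texture directly.
        return "segmentation"
    if view_angle_deg <= angle_threshold_deg:
        # Virtual viewpoint is close to real cameras: blend textures from
        # neighbouring cameras (inter-camera interpolation).
        return "inter-camera"
    # Otherwise fall back to temporally neighbouring frames of the same
    # camera, where the occluded region may be visible (inter-frame
    # interpolation).
    return "inter-frame"


if __name__ == "__main__":
    # Toy examples of the decision rule.
    print(select_texture_source(mask_coverage=0.95, view_angle_deg=40.0))  # segmentation
    print(select_texture_source(mask_coverage=0.60, view_angle_deg=5.0))   # inter-camera
    print(select_texture_source(mask_coverage=0.60, view_angle_deg=40.0))  # inter-frame
```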

DOI: 10.1109/ICIP.2013.6738427


Cite this paper

@article{Yamada2013OcclusionRF,
  title={Occlusion robust free-viewpoint video synthesis based on inter-camera/-frame interpolation},
  author={Kentaro Yamada and Hiroshi Sankoh and Masaru Sugano and Sei Naito},
  journal={2013 IEEE International Conference on Image Processing},
  year={2013},
  pages={2072-2076}
}