3D omnistereo panorama generation from a monocular video sequence

  • Ning, Zhiyu
  • Published 2006


An interesting problem in computer vision is generating 3D video from traditional 2D video. To achieve this, we first need to understand how to use a single monocular camera to create a 3D scene sensation. We call the result a ‘stereo panorama’. In paper [1], Shmuel Peleg and colleagues proposed a new approach to generating a stereo panorama using only a single video camera rotating about an axis behind its lens, based on the principle of the X-slit camera described in paper [2]. Specifically, the stereo panorama images are obtained by pasting together strips taken from each image in the video sequence. However, they did not specify how to choose the width and location of each strip, or how these parameters affect the 3D sensation. Nor did they give a benchmark for judging the quality of a stereo image pair. The chosen strip width is clearly related to the camera's rotation speed, and the location of each strip is related to disparity. In this report, I use a ‘virtual speed’ rather than the actual speed of the camera to determine the strip width, and I present the relationship between virtual speed and actual speed. I also analyze the location of each strip with respect to the actual 3D sensation of the stereo panorama images, and I propose a benchmark for assessing stereo image quality.

Keywords: computer vision, stereo panorama, X-slit camera, parameters, disparity
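The strip-mosaic idea described above can be sketched in a few lines: take a narrow vertical strip from each frame of the rotating-camera video, offset to one side of the image centre for the left-eye panorama and to the other side for the right-eye panorama, and concatenate the strips. The function below is a minimal illustrative sketch, not the paper's exact procedure; the function name and the `strip_width` and `eye_offset` parameters are hypothetical stand-ins for the strip width and strip location discussed in the abstract.

```python
import numpy as np

def omnistereo_panoramas(frames, strip_width=8, eye_offset=40):
    """Sketch of strip-mosaic omnistereo generation (hypothetical helper).

    frames: list of HxWx3 images from a camera rotating about a
        vertical axis behind its lens.
    strip_width: pixel width of the strip taken from each frame;
        related to the camera's rotation ("virtual") speed.
    eye_offset: horizontal distance of each strip from the image
        centre; controls disparity and hence the 3D sensation.
    """
    h, w, _ = frames[0].shape
    cx = w // 2
    left_strips, right_strips = [], []
    for f in frames:
        # A strip to the right of centre feeds the left-eye panorama,
        # and a strip to the left feeds the right-eye panorama.
        l0 = cx + eye_offset
        r0 = cx - eye_offset - strip_width
        left_strips.append(f[:, l0:l0 + strip_width])
        right_strips.append(f[:, r0:r0 + strip_width])
    # Concatenate strips horizontally to form the two panoramas.
    return np.hstack(left_strips), np.hstack(right_strips)
```

Viewing the two returned panoramas as a stereo pair yields the depth sensation; increasing `eye_offset` increases disparity, while `strip_width` must match the angular rotation per frame to avoid gaps or overlaps in the mosaic.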


39 Figures and Tables

Cite this paper

@inproceedings{Ning20063DOP, title={3D omnistereo panorama generation from a monocular video sequence}, author={Zhiyu Ning}, year={2006} }