Mononizing binocular videos

Wenbo Hu, Menghan Xia, Chi-Wing Fu, and Tien-Tsin Wong. ACM Transactions on Graphics (TOG), pages 1–16.
This paper presents the idea of mononizing binocular videos and a framework to effectively realize it. Mononize means we purposely convert a binocular video into a regular monocular video with the stereo information implicitly encoded in a visual but nearly imperceptible form. Hence, we can impartially distribute and show the mononized video as an ordinary monocular video. Unlike ordinary monocular videos, we can restore from it the original binocular video and show it on a stereoscopic…

IICNet: A Generic Framework for Reversible Image Conversion

This work develops the Invertible Image Conversion Net (IICNet) as a generic solution to various reversible image conversion (RIC) tasks, owing to its strong capacity and task-independent design; it maintains a highly invertible structure based on invertible neural networks (INNs) to better preserve information during conversion.

Enhance Convolutional Neural Networks with Noise Incentive Block

The Noise Incentive Block (NIB) is proposed as a generic plug-in for any CNN generation model: it perturbs the input data symmetrically with a noise map and reassembles them in the feature domain as driven by the objective function.

Point Set Self-Embedding

This work presents an innovative method for point set self-embedding that encodes the structural information of a dense point set into its sparser version in a visual but imperceptible form, and can leverage the embedded information to fully restore the original point set for detailed analysis on remote servers.

Scale-arbitrary Invertible Image Downscaling

This method can downscale the input high-resolution (HR) image to low-resolution (LR) images of arbitrary scale, with the HR information embedded in a nearly imperceptible form, and faithfully restore the HR image at its original resolution.

Graph-based approach for enumerating floorplans based on users' specifications

This paper aims at automatically generating dimensioned floorplans while considering constraints given by the users in the form of adjacency and connectivity graphs. The obtained floorplans…

Embedding Novel Views in a Single JPEG Image

The results show that the proposed method can restore high-fidelity novel views from a slightly modified JPEG image and is robust to JPEG compression, color adjusting, and cropping.

Nonlinear disparity mapping for stereoscopic 3D

The most important perceptual aspects of stereo vision are discussed and their implications for stereoscopic content creation are formalized into a set of basic disparity mapping operators that enable us to control and retarget the depth of a stereoscopic scene in a nonlinear and locally adaptive fashion.
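The abstract above refers to nonlinear, locally adaptive disparity mapping operators. As a rough illustration of what a nonlinear disparity operator can look like, the sketch below applies a logarithmic range compression that attenuates large disparities more than small ones; the function name, the `strength` parameter, and the exact curve are assumptions for illustration, not the paper's actual operator set.

```python
import math

def compress_disparity(d, d_max, strength=4.0):
    """Nonlinear (logarithmic) compression of a disparity value d into
    the range [-d_max, d_max]: small disparities are preserved relatively
    more than large ones, and the sign of d is kept.
    Illustrative sketch only, not the paper's operators."""
    if d == 0:
        return 0.0
    sign = math.copysign(1.0, d)
    # Normalized log curve: maps |d| = d_max back to d_max exactly.
    return sign * d_max * math.log1p(strength * abs(d) / d_max) / math.log1p(strength)
```

A locally adaptive version would vary `strength` per region (e.g. by saliency), which is the kind of control such operator sets are designed to expose.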

Hiding of phase-based stereo disparity for ghost-free viewing without glasses

A novel method synthesizes ghost-free stereoscopic images by projecting the disparity-inducer components as light onto the object's surface; it can alter the depth impression of a real object without being noticed by naked-eye viewers.

Perceptual Real-Time 2D-to-3D Conversion Using Cue Fusion

This work proposes a system to infer binocular disparity from a monocular video stream in real time; the result is numerically inaccurate but produces a very similar overall depth impression, with a plausible overall layout, sharp edges, fine details, and agreement between luminance and disparity.

Overview of the Stereo and Multiview Video Coding Extensions of the H.264/MPEG-4 AVC Standard

An overview of the algorithmic design used for extending H.264/MPEG-4 AVC towards MVC is provided, along with a summary of the coding performance achieved by MVC for both stereo and multiview video.

3DTV at home

This work proposes a real-time system that can convert stereoscopic video to a high-quality multiview video that can be directly fed to automultiscopic displays and analyzes the visual quality and robustness of the technique on a number of synthetic and real-world examples.

Overview of the Multiview and 3D Extensions of High Efficiency Video Coding

The more advanced 3D video extension, 3D-HEVC, targets a coded representation consisting of multiple views and associated depth maps, as required for generating additional intermediate views in advanced 3D displays.

Content-Based Scalable Multi-View Video Coding Using 4D Wavelet

The peak signal-to-noise ratio (PSNR) within the region of interest (ROI) can be improved by 2–4 dB, and the subjective visual quality of the ROI in the proposed scheme is better than that of the conventional SMVC algorithm.
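The 2–4 dB gain above is measured in PSNR, a standard fidelity metric. For reference, a minimal sketch of the textbook PSNR formula, 10·log10(MAX²/MSE), over images given as flat pixel lists (the function name and flat-list representation are illustrative assumptions):

```python
import math

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized
    images, each given as a flat sequence of pixel values."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

With 8-bit pixels, an off-by-one error on every pixel (MSE = 1) gives about 48.13 dB; a 2–4 dB improvement corresponds to roughly a 1.6×–2.5× reduction in MSE.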

Stereo magnification

This paper explores an intriguing scenario for view synthesis: extrapolating views from imagery captured by narrow-baseline stereo cameras, including VR cameras and now-widespread dual-lens camera phones, and proposes a learning framework that leverages a new layered representation that is called multiplane images (MPIs).
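The multiplane images mentioned above are stacks of fronto-parallel RGBA layers that are rendered by standard back-to-front "over" alpha compositing. A minimal sketch of that compositing step, using a single scalar intensity per layer for brevity (the function name and (color, alpha) pair representation are assumptions for illustration):

```python
def composite_mpi(planes):
    """Composite multiplane image (MPI) layers with the 'over' operator.
    planes: list of (color, alpha) pairs ordered far to near, where color
    is a scalar intensity in [0, 1]. Returns the composited color."""
    out = 0.0
    for color, alpha in planes:  # far -> near
        out = color * alpha + out * (1.0 - alpha)
    return out
```

A full MPI renderer would first warp each plane into the target view via a homography before compositing; that step is omitted here.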

Depth-Assisted Full Resolution Network for Single Image-Based View Synthesis

A full-resolution network extracts fine-scale image features, which helps prevent blurry artifacts, and a synthesis layer is used not only to warp the observed pixels to the desired positions but also to hallucinate the missing pixels from other recorded pixels.

Deep view synthesis from sparse photometric images

This paper synthesizes novel viewpoints across a wide range of viewing directions (covering a 60° cone) from a sparse set of just six views, using a deep convolutional network trained to directly synthesize new views from the six input views.