Depth superresolution using motion adaptive regularization

@inproceedings{Kamilov2016DepthSU,
  title={Depth superresolution using motion adaptive regularization},
  author={Ulugbek S. Kamilov and Petros T. Boufounos},
  booktitle={2016 IEEE International Conference on Multimedia \& Expo Workshops (ICMEW)},
  year={2016},
  pages={1-6}
}
  • U. Kamilov, P. Boufounos
  • Published 4 March 2016
  • Computer Science
  • 2016 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)
Spatial resolution of depth sensors is often significantly lower than that of conventional optical cameras. Recent work has explored improving depth resolution using a higher-resolution intensity image as side information. In this paper, we demonstrate that further incorporating temporal information from videos can significantly improve the results. In particular, we propose a novel approach that improves depth resolution, exploiting the space-time redundancy in the depth and…

Citations

Motion-Adaptive Depth Superresolution

A new formulation is proposed that incorporates temporal information and exploits the motion of objects in the video to significantly improve on existing methods, exploiting the space-time redundancy in the depth and intensity using motion-adaptive low-rank regularization.
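To make the low-rank idea concrete, here is a minimal sketch: patches that follow the same scene point across frames are stacked as columns of a matrix, which should be nearly low rank, and are regularized by singular-value soft-thresholding. The function names, patch size, threshold, and the assumption that motion-aligned patch coordinates (e.g. from optical flow) are already available are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def svt(M, tau):
    """Singular-value soft-thresholding: proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)
    return (U * s) @ Vt

def low_rank_patch_step(depth_frames, coords, patch=8, tau=0.1):
    """One regularization step (illustrative): gather a motion-aligned patch from
    each frame, stack the patches as columns, shrink singular values, write back.

    depth_frames : (T, H, W) array of depth estimates
    coords       : list of (row, col) top-left corners, one per frame, assumed
                   to track the same scene point over time (e.g. via optical flow)
    """
    # Each column is one vectorized patch; temporal redundancy => low rank.
    M = np.stack([depth_frames[t, r:r + patch, c:c + patch].ravel()
                  for t, (r, c) in enumerate(coords)], axis=1)
    M = svt(M, tau)
    for t, (r, c) in enumerate(coords):
        depth_frames[t, r:r + patch, c:c + patch] = M[:, t].reshape(patch, patch)
    return depth_frames
```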

Image-guided ToF depth upsampling: a survey

This paper reviews the approaches that couple ToF depth images with high-resolution optical images and provides an overview of performance evaluation tests presented in the related studies.

Online convolutional dictionary learning for multimodal imaging

This paper proposes a new method that reconstructs multimodal images from their linear measurements by exploiting redundancies across different modalities, and develops an online algorithm that enables the unsupervised learning of convolutional dictionaries on large-scale datasets that are typical in such applications.

Multiple Image Arrangement for Subjective Quality Assessment

The research indicates that an isometric arrangement imposes less strain on participants and yields a more uniform distribution of eye fixations and movements, and is therefore expected to produce more reliable subjective ratings.

Methods for solving regularized inverse problems : from non-Euclidean fidelities to computational imaging applications

This thesis presents three main contributions to the study of inverse problems, whose ingredients are the forward model, the prior or regularization, the data fidelity, and the recovery method, with a focus on recovering a key structural property of a sparse signal: its support.

References

Showing 1-10 of 31 references

Joint Geodesic Upsampling of Depth Images

A novel approximation algorithm is developed whose complexity grows linearly with the image size and which achieves real-time performance; it is well suited for upsampling depth images using binary edge maps, an important sensor-fusion application.
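For intuition about what "geodesic" means here, below is a rough, unoptimized sketch (not the linear-time approximation developed in the paper): a multi-source Dijkstra over the 4-connected pixel grid, where edge costs mix the spatial step with the intensity difference of the high-resolution guide image, and each pixel inherits the depth of its geodesically nearest low-resolution sample. All names and parameter values are illustrative.

```python
import heapq
import numpy as np

def geodesic_depth_upsampling(intensity, seeds, alpha=10.0):
    """Assign each pixel the depth of its geodesically nearest seed.

    intensity : (H, W) high-resolution guide image in [0, 1]
    seeds     : dict {(row, col): depth} of sparse low-resolution depth samples
    alpha     : weight of the intensity term in the geodesic edge cost
    """
    H, W = intensity.shape
    dist = np.full((H, W), np.inf)
    depth = np.zeros((H, W))
    heap = []
    for (r, c), d in seeds.items():
        dist[r, c] = 0.0
        depth[r, c] = d
        heapq.heappush(heap, (0.0, r, c))
    while heap:
        du, r, c = heapq.heappop(heap)
        if du > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < H and 0 <= nc < W:
                # Edge cost mixes the unit spatial step with the intensity jump,
                # so geodesics avoid crossing image edges.
                cost = np.hypot(1.0, alpha * (intensity[nr, nc] - intensity[r, c]))
                if du + cost < dist[nr, nc]:
                    dist[nr, nc] = du + cost
                    depth[nr, nc] = depth[r, c]
                    heapq.heappush(heap, (du + cost, nr, nc))
    return depth
```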

Image Guided Depth Upsampling Using Anisotropic Total Generalized Variation

This work formulates a convex optimization problem using higher-order regularization for depth image upsampling and derives a numerical algorithm, based on a primal-dual formulation, that is efficiently parallelized and runs at multiple frames per second.
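The paper couples a quadratic data term with an anisotropic second-order (TGV) regularizer minimized by a primal-dual algorithm. The sketch below deliberately substitutes a much simpler first-order, intensity-weighted TV term minimized by plain gradient descent on a smoothed (Charbonnier) objective, just to make the overall structure (data fidelity plus guide-weighted regularizer) concrete. The zero-order-hold sampling model, function name, and parameter values are assumptions for illustration.

```python
import numpy as np

def guided_tv_upsample(depth_lr, guide, scale, lam=0.1, beta=10.0,
                       eps=0.05, step=0.05, iters=500):
    """Minimize  sum mask*(u - d)^2 + lam * sum w * sqrt(|grad u|^2 + eps^2)
    by gradient descent, where the weights w are small across guide-image edges.

    depth_lr : (h, w) low-resolution depth
    guide    : (h*scale, w*scale) high-resolution intensity in [0, 1]
    """
    H, W = guide.shape
    # Zero-order-hold initialization and a mask selecting observed samples.
    u = np.kron(depth_lr, np.ones((scale, scale)))
    mask = np.zeros((H, W))
    mask[::scale, ::scale] = 1.0
    target = np.zeros((H, W))
    target[::scale, ::scale] = depth_lr
    # Per-pixel edge weights from the guide image gradients.
    gy, gx = np.gradient(guide)
    w = np.exp(-beta * np.hypot(gx, gy))
    for _ in range(iters):
        uy, ux = np.gradient(u)
        mag = np.sqrt(ux**2 + uy**2 + eps**2)
        # Gradient of the smoothed weighted-TV term, using the continuous-form
        # approximation  -div(w * grad u / mag)  with central differences.
        py, px = w * uy / mag, w * ux / mag
        div = np.gradient(py, axis=0) + np.gradient(px, axis=1)
        grad = 2.0 * mask * (u - target) - lam * div
        u -= step * grad
    return u
```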

A Noise‐aware Filter for Real‐time Depth Upsampling

This work presents an adaptive multi-lateral upsampling filter that takes into account the inherent noisy nature of real-time depth data and can greatly improve reconstruction quality, boost the resolution of the data to that of the video sensor, and prevent unwanted artifacts like texture copy into geometry.

High quality depth map upsampling for 3D-TOF cameras

This paper describes an application framework to perform high-quality upsampling on depth maps captured from a low-resolution and noisy 3D time-of-flight (3D-ToF) camera that has been coupled with a high-resolution color camera.

Upsampling range data in dynamic environments

This work presents a flexible method for fusing information from optical and range sensors based on an accelerated high-dimensional filtering approach, and describes how to integrate priors on object motion and appearance and how to achieve an efficient implementation using parallel processing hardware such as GPUs.

LidarBoost: Depth superresolution for ToF 3D shape scanning

LidarBoost is presented, a 3D depth superresolution method that combines several low resolution noisy depth images of a static scene from slightly displaced viewpoints, and merges them into a high-resolution depth image.

Guided Depth Upsampling via a Cosparse Analysis Model

A new approach is presented to upsample depth maps when aligned high-resolution color images are given; it exploits the cosparsity of analysis operators applied to the depth map, together with data fidelity and color-guided smoothness constraints, for upsampling.

Fusion of range and color images for denoising and resolution enhancement with a non-local filter

Spatial-Depth Super Resolution for Range Images

We present a new post-processing step to enhance the resolution of range images. Using one or two registered and potentially high-resolution color images as reference, we iteratively refine the input low-resolution range image.

Joint bilateral upsampling

It is demonstrated that the available high-resolution input image may be leveraged as a prior in a joint bilateral upsampling procedure to produce a better high-resolution solution.
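As a reference point, here is a minimal, deliberately slow sketch of the joint bilateral upsampling formula: each high-resolution output pixel is a weighted average of nearby low-resolution depth samples, with a spatial weight computed in low-resolution coordinates and a range weight computed from the high-resolution guide image. The function name and parameter values are illustrative.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, guide, scale, sigma_s=1.0, sigma_r=0.1):
    """Joint bilateral upsampling of a low-resolution depth map.

    depth_lr : (h, w) low-resolution depth
    guide    : (h*scale, w*scale) high-resolution intensity in [0, 1]
    """
    h, w = depth_lr.shape
    H, W = guide.shape
    out = np.zeros((H, W))
    rad = int(np.ceil(2 * sigma_s))   # window radius in low-res pixels
    for y in range(H):
        for x in range(W):
            yl, xl = y / scale, x / scale          # position on the low-res grid
            y0, x0 = int(round(yl)), int(round(xl))
            num = den = 0.0
            for j in range(max(0, y0 - rad), min(h, y0 + rad + 1)):
                for i in range(max(0, x0 - rad), min(w, x0 + rad + 1)):
                    # Spatial weight in low-resolution coordinates.
                    ws = np.exp(-((j - yl)**2 + (i - xl)**2) / (2 * sigma_s**2))
                    # Range weight compares guide intensities at the two sites.
                    gj, gi = min(H - 1, j * scale), min(W - 1, i * scale)
                    wr = np.exp(-(guide[y, x] - guide[gj, gi])**2 / (2 * sigma_r**2))
                    num += ws * wr * depth_lr[j, i]
                    den += ws * wr
            out[y, x] = num / max(den, 1e-12)
    return out
```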