Accurate Light Field Depth Estimation With Superpixel Regularization Over Partially Occluded Regions

@article{Chen2018AccurateLF,
  title={Accurate Light Field Depth Estimation With Superpixel Regularization Over Partially Occluded Regions},
  author={Jie Chen and Junhui Hou and Yun Ni and Lap-Pui Chau},
  journal={IEEE Transactions on Image Processing},
  year={2018},
  volume={27},
  pages={4889-4900}
}
Depth estimation is a fundamental problem for light field photography applications. Numerous methods have been proposed in recent years, which either focus on crafting cost terms for more robust matching, or on analyzing the geometry of scene structures embedded in the epipolar-plane images. Significant improvements have been made in terms of overall depth estimation error; however, current state-of-the-art methods still show limitations in handling intricate occluding structures and complex…
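As background for the epipolar-plane-image (EPI) analysis mentioned in the abstract: under a Lambertian assumption, a scene point traces a straight line in the EPI whose slope corresponds to its disparity, so a local orientation estimate (for instance via the structure tensor, as in several of the works cited below) yields a per-pixel disparity. The Python/NumPy sketch below is a generic illustration of that relationship only, not the method of this paper; the EPI layout (views along axis 0), the Sobel/Gaussian operators, and the sign convention are assumptions.

    import numpy as np
    from scipy.ndimage import gaussian_filter, sobel

    def epi_disparity(epi, sigma=1.5):
        """Estimate per-pixel disparity from a single EPI slice.

        epi: 2-D array of shape (n_views, width), one row per sub-aperture view
             along a horizontal baseline (layout is an assumption).
        Returns the EPI line slope (disparity in pixels per view step) and a
        coherence measure usable as a confidence weight.
        """
        # Gradients along the angular axis (s, rows) and spatial axis (u, columns).
        g_s = sobel(epi, axis=0, mode="nearest")
        g_u = sobel(epi, axis=1, mode="nearest")

        # Smoothed structure-tensor entries.
        J_uu = gaussian_filter(g_u * g_u, sigma)
        J_ss = gaussian_filter(g_s * g_s, sigma)
        J_su = gaussian_filter(g_s * g_u, sigma)

        # Dominant gradient orientation; the EPI line is perpendicular to it,
        # so its slope du/ds (the disparity) is -tan(theta).
        theta = 0.5 * np.arctan2(2.0 * J_su, J_uu - J_ss)
        disparity = -np.tan(theta)  # sign convention depends on view ordering (assumption)

        # Coherence in [0, 1]: how strongly oriented the local EPI structure is.
        coherence = np.sqrt((J_uu - J_ss) ** 2 + 4.0 * J_su ** 2) / (J_uu + J_ss + 1e-8)
        return disparity, coherence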
Fast and Accurate Depth Estimation From Sparse Light Fields
It is demonstrated that a few refinement iterations result in globally consistent dense depth maps even in the presence of wide textureless regions and occlusions, which are comparable with state-of-the-art results.
Toward Real-World Light Field Depth Estimation: A Noise-Aware Paradigm Using Multi-Stereo Disparity Integration
A noise-aware light field depth estimation algorithm is proposed, and both quantitative and qualitative results demonstrate the superiority and robustness of the method.
Accurate Light Field Depth Estimation via an Occlusion-Aware Network
This paper proposes an occlusion-aware network that is capable of estimating accurate depth maps with sharp edges, and achieves better performance on the 4D light field benchmark, especially in occlusion regions, when compared with current state-of-the-art light-field depth estimation algorithms.
Occlusion-aware Unsupervised Learning of Depth from 4-D Light Fields
Experimental results on synthetic data show that the proposed unsupervised learning-based depth estimation method can significantly shrink the performance gap between previous unsupervised methods and supervised ones, and produces depth maps with accuracy comparable to traditional methods at clearly reduced computational cost.
View-consistent 4D Light Field Depth Estimation
This work proposes a method to compute depth maps for every sub-aperture image in a light field in a view-consistent way, and achieves competitive quantitative metrics and qualitative performance on both synthetic and real-world light fields.
Unsupervised Dense Light Field Reconstruction with Occlusion Awareness
An unsupervised learning method for LF-oriented view synthesis is presented, providing a simple solution for generating quality light fields from a sparse set of views, using per-view disparity as a geometry proxy to warp input views to novel views.
4D Light Field Superpixel and Segmentation
The essential element behind image pixels, i.e., rays in light space, is considered, and the light field superpixel (LFSP) is proposed to eliminate the ambiguity; the full-sliced property of the proposed LFSP algorithm is verified by comparison with classical supervoxel algorithms.
Light Field Reconstruction Using Dynamically Generated Filters
A novel learning-based light field reconstruction approach is proposed to increase the angular resolution of a sparsely-sampled light field image, using a deep neural network to estimate the filtering kernels for each sub-aperture image.
Differentiable Diffusion for Dense Depth Estimation from Multi-view Images
We present a method to estimate dense depth by optimizing a sparse set of points such that their diffusion into a depth map minimizes a multi-view reprojection error from RGB supervision. We optimize…
Edge-aware Bidirectional Diffusion for Dense Depth Estimation from Light Fields
An algorithm is proposed to estimate fast and accurate depth maps from light fields via a sparse set of depth edges and gradients, based on the idea that true depth edges are more sensitive than texture edges to local constraints, and so they can be reliably disambiguated through a bidirectional diffusion process.
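The two diffusion-based entries above share a common ingredient: a sparse set of reliable depth samples is spread into a dense map while image edges limit the propagation. The sketch below illustrates that generic idea with plain edge-weighted Jacobi diffusion in Python/NumPy; it is not the bidirectional scheme of the paper above, and the exponential weighting, seed initialisation, and iteration count are assumptions.

    import numpy as np

    def diffuse_sparse_depth(image, sparse_depth, mask, beta=10.0, n_iters=500):
        """Propagate sparse depth samples into a dense map by edge-weighted diffusion.

        image:        (H, W) grayscale guide image in [0, 1]
        sparse_depth: (H, W) depth values, valid only where mask is True
        mask:         (H, W) boolean, True at the sparse seed locations
        """
        # Initialise unknown pixels with the mean of the seeds (arbitrary choice).
        depth = np.where(mask, sparse_depth, sparse_depth[mask].mean())

        # Edge-stopping weight: small across strong intensity edges, so depth
        # does not bleed over object boundaries.
        def w(a, b):
            return np.exp(-beta * np.abs(a - b))

        for _ in range(n_iters):
            # 4-connected neighbours via np.roll (wrap-around at borders is a
            # simplification of this sketch).
            up    = np.roll(depth, -1, axis=0); wu = w(image, np.roll(image, -1, axis=0))
            down  = np.roll(depth,  1, axis=0); wd = w(image, np.roll(image,  1, axis=0))
            left  = np.roll(depth, -1, axis=1); wl = w(image, np.roll(image, -1, axis=1))
            right = np.roll(depth,  1, axis=1); wr = w(image, np.roll(image,  1, axis=1))

            new_depth = (wu * up + wd * down + wl * left + wr * right) / (wu + wd + wl + wr)
            # Keep the sparse seeds fixed as hard constraints.
            depth = np.where(mask, sparse_depth, new_depth)

        return depth

In practice the linear system implied by these updates would typically be solved directly or with a conjugate-gradient/multigrid solver rather than by fixed-point iteration; the loop above is only the simplest way to show the idea.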

References

Showing 1-10 of 43 references
Depth Estimation with Occlusion Modeling Using Light-Field Cameras
An occlusion-aware depth estimation algorithm is developed, based on the observation that although photo-consistency is not preserved for pixels at occlusions, it still holds in approximately half the viewpoints. (A minimal sketch of this view-selection idea appears after this reference list.)
Light-Field Depth Estimation via Epipolar Plane Image Analysis and Locally Linear Embedding
A novel method for 4D light-field (LF) depth estimation is proposed that exploits the special linear structure of an epipolar-plane image (EPI) and locally linear embedding (LLE) with a local reliability measure, achieving higher performance than typical and recent state-of-the-art LF stereo matching methods.
Robust Light Field Depth Estimation for Noisy Scene with Occlusion
  • Williem, I. Park
  • 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016
The proposed method is more robust to occlusion and less sensitive to noise, and outperforms the state-of-the-art light field depth estimation methods in qualitative and quantitative evaluation.
Accurate depth map estimation from a lenslet light field camera
This paper introduces an algorithm that accurately estimates depth maps from a lenslet light field camera, computing multi-view stereo correspondences with sub-pixel accuracy in a cost volume built via the phase shift theorem. (A minimal sketch of the phase-shift idea appears after this reference list.)
The Light Field Camera: Extended Depth of Field, Aliasing, and Superresolution
  • Tom E. Bishop, P. Favaro
  • IEEE Transactions on Pattern Analysis and Machine Intelligence, 2012
This work shows that applying traditional multiview stereo methods to the extracted low-resolution views can result in reconstruction errors due to aliasing, and incorporates Lambertian and texture-preserving priors to reconstruct both scene depth and its superresolved texture in a variational Bayesian framework.
Globally consistent depth labeling of 4D light fields
We present a novel paradigm to deal with depth reconstruction from 4D light fields in a variational framework. Taking into account the special structure of light field data, we reformulate the…
Robust and Dense Depth Estimation for Light Field Images
This work proposes a depth estimation method for light field images that computes disparity maps between specific pairs of views, together with a disparity interpolation method that increases the density and improves the accuracy of this initial estimate.
Scene reconstruction from high spatio-angular resolution light fields
This paper proposes an algorithm that leverages coherence in massive light fields by breaking with a number of established practices in image-based reconstruction, and introduces a sparse representation and a propagation scheme for reliable depth estimates which make the algorithm particularly effective for 3D input.
Depth Estimation and Specular Removal for Glossy Surfaces Using Point and Line Consistency with Light-Field Cameras
A novel theory of the relationship between light-field data and reflectance under the dichromatic model is presented, along with a new photo-consistency metric, line consistency, which represents how viewpoint changes affect specular points.
Occlusion-Model Guided Antiocclusion Depth Estimation in Light Field
  • Hao Zhu, Qing Wang, J. Yu
  • IEEE Journal of Selected Topics in Signal Processing, 2017
The complete occlusion model in light fields is explored and the occluder-consistency between the spatial and angular spaces is derived, which is used as guidance to select unoccluded views for each candidate occlusion point.
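For the occlusion-modeling reference above ("Depth Estimation with Occlusion Modeling Using Light-Field Cameras"), the observation that photo-consistency survives in roughly half the viewpoints suggests scoring a candidate depth using only the most consistent half of the angular samples. The Python/NumPy sketch below is a minimal illustration of that view-selection idea; the fixed 50% split and the absolute-difference cost are assumptions, not the paper's exact formulation.

    import numpy as np

    def occlusion_aware_cost(angular_samples, center_value):
        """Photo-consistency cost that tolerates partial occlusion.

        angular_samples: 1-D array of intensities of one scene point gathered
                         from all sub-aperture views after refocusing to a
                         candidate depth.
        center_value:    intensity observed in the central view.

        At an occlusion boundary roughly half the views see the occluder rather
        than the point, so only the half of the samples closest to the central
        view's intensity is scored (the split ratio is an assumption).
        """
        deviations = np.abs(angular_samples - center_value)
        keep = np.argsort(deviations)[: len(angular_samples) // 2]
        return np.mean(deviations[keep])

The candidate depth that minimizes this cost over a depth sweep would then be selected per pixel, typically followed by a regularization step.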
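For the lenslet reference above ("Accurate depth map estimation from a lenslet light field camera"), sub-pixel shifts between sub-aperture views are needed because the baselines of a lenslet camera are a fraction of a pixel; the Fourier phase-shift theorem provides such shifts. The sketch below shows only that generic frequency-domain shift in Python/NumPy, not the paper's full cost-volume pipeline; the function name and the real-valued output are assumptions.

    import numpy as np

    def subpixel_shift(img, dx, dy):
        """Shift a 2-D image by a sub-pixel amount (dx, dy) using the Fourier
        phase-shift theorem: a spatial shift is a linear phase ramp in frequency."""
        H, W = img.shape
        fy = np.fft.fftfreq(H)[:, None]   # cycles per pixel along rows
        fx = np.fft.fftfreq(W)[None, :]   # cycles per pixel along columns
        ramp = np.exp(-2j * np.pi * (fx * dx + fy * dy))
        return np.real(np.fft.ifft2(np.fft.fft2(img) * ramp))

In a cost-volume framework of this kind, each sub-aperture view would be shifted by the candidate disparity times its angular offset from the central view before computing a matching cost.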