Revisiting Light Field Rendering with Deep Anti-Aliasing Neural Network

@article{Wu2021RevisitingLF,
  title={Revisiting Light Field Rendering with Deep Anti-Aliasing Neural Network},
  author={Gaochang Wu and Yebin Liu and Lu Fang and Tianyou Chai},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2021},
  volume={PP}
}
Light field (LF) reconstruction is mainly confronted with two challenges: large disparity and non-Lambertian effects. Typical approaches either address the large-disparity challenge using depth estimation followed by view synthesis, or eschew explicit depth information to enable non-Lambertian rendering, but rarely solve both challenges in a unified framework. In this paper, we revisit the classic LF rendering framework to address both challenges by incorporating it with deep learning…
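For context, the classic LF rendering framework that the paper revisits can be summarized as a shear-and-average operation: every input view is sheared toward the target viewpoint by a chosen disparity, and the sheared views are averaged, which acts as the anti-aliasing reconstruction filter of plenoptic sampling. The following is a minimal sketch under simplifying assumptions (grayscale 4D light field, a single global rendering disparity, an illustrative bilinear_sample helper), not the paper's implementation:

```python
# Minimal sketch of classic "shear and average" light field rendering,
# assuming a grayscale 4D light field lf[u, v, y, x] of shape (U, V, H, W).
# All names and the sign convention of the shear are illustrative.
import numpy as np

def render_novel_view(lf, d, su, sv):
    """Render a view at fractional angular position (su, sv) by shearing each
    input view toward the target by disparity d and averaging; the averaging
    is the anti-aliasing reconstruction filter of plenoptic sampling."""
    U, V, H, W = lf.shape
    ys, xs = np.mgrid[0:H, 0:W].astype(np.float64)
    acc = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shear: pixels shift in proportion to angular offset times disparity.
            x_src = xs + d * (u - su)
            y_src = ys + d * (v - sv)
            acc += bilinear_sample(lf[u, v], y_src, x_src)
    return acc / (U * V)

def bilinear_sample(img, y, x):
    """Bilinear interpolation with border clamping (hypothetical helper)."""
    H, W = img.shape
    x0 = np.clip(np.floor(x).astype(int), 0, W - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, H - 2)
    wx = np.clip(x, 0, W - 1) - x0
    wy = np.clip(y, 0, H - 1) - y0
    return ((1 - wy) * (1 - wx) * img[y0, x0] + (1 - wy) * wx * img[y0, x0 + 1]
            + wy * (1 - wx) * img[y0 + 1, x0] + wy * wx * img[y0 + 1, x0 + 1])
```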
Light Field Neural Rendering
TLDR
A two-stage transformer-based model that aggregates features along epipolar lines, then aggregates features across reference views to produce the color of a target ray, in order to represent view-dependent effects accurately.
End-to-End Residual Network for Light Field Reconstruction on Raw Images and View Image Stacks
TLDR
Experimental findings on real-world datasets show that the proposed learning-based solution for reconstructing dense, high-quality LF images performs well and outperforms state-of-the-art approaches.
NeLF: Practical Novel View Synthesis with Neural Light Field
TLDR
This method can render novel views by sampling rays and querying the color for each ray from the network directly; thus enabling fast light field rendering with a very sparse set of input images.
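As a rough illustration of this direct ray-to-color idea, a neural light field maps a ray's two-plane coordinates (u, v, s, t) straight to an RGB value, so rendering needs one network query per ray and no volume sampling. The tiny MLP below is a toy stand-in for illustration only, not the architecture from the paper:

```python
# Toy sketch of a neural light field: a small MLP maps 4D two-plane ray
# coordinates directly to RGB. Layer sizes and initialization are arbitrary
# placeholders, not the paper's network.
import numpy as np

rng = np.random.default_rng(0)
layers = [(rng.normal(0, 0.1, (4, 64)), np.zeros(64)),
          (rng.normal(0, 0.1, (64, 64)), np.zeros(64)),
          (rng.normal(0, 0.1, (64, 3)), np.zeros(3))]

def query_ray_color(uvst):
    """uvst: (N, 4) two-plane ray coordinates -> (N, 3) RGB in [0, 1]."""
    h = uvst
    for W, b in layers[:-1]:
        h = np.maximum(h @ W + b, 0.0)           # ReLU hidden layers
    W, b = layers[-1]
    return 1.0 / (1.0 + np.exp(-(h @ W + b)))    # sigmoid output

# Rendering a target view = one query per pixel ray, no volume sampling.
rays = rng.uniform(-1, 1, (8, 4))
print(query_ray_color(rays).shape)  # (8, 3)
```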
Disentangling Light Fields for Super-Resolution and Disparity Estimation
TLDR
This paper first designs a class of domain-specific convolutions to disentangle LFs along different dimensions, and then leverages these disentangled features with task-specific modules, demonstrating the effectiveness, efficiency, and generality of the disentangling mechanism.
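The disentangling idea can be pictured as re-viewing the same 4D LF tensor along different dimension pairs so that separate convolutions can act on spatial, angular, and EPI structure. A hedged sketch with illustrative shapes; the paper's actual domain-specific convolutions are more involved:

```python
# Sketch of disentangling a 4D light field into per-dimension views that
# separate convolutions could then process. Shapes are illustrative only.
import numpy as np

U, V, H, W = 5, 5, 32, 32
lf = np.random.rand(U, V, H, W)  # (angular u, angular v, spatial y, spatial x)

# Spatial view: one (H, W) sub-aperture image per angular position,
# for spatial convolutions.
spatial_view = lf.reshape(U * V, H, W)

# Angular view: one (U, V) macro-pixel per spatial location,
# for angular convolutions.
angular_view = lf.transpose(2, 3, 0, 1).reshape(H * W, U, V)

# EPI view: 2D slices mixing one angular and one spatial dimension,
# the classic epipolar plane images.
epi_view = lf.transpose(1, 2, 0, 3).reshape(V * H, U, W)

print(spatial_view.shape, angular_view.shape, epi_view.shape)
```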
Content-aware Warping for View Synthesis
TLDR
A new end-to-end learning-based framework for novel view synthesis from two input source views, in which two additional modules are naturally proposed to handle the occlusion issue and capture the spatial correlation among pixels of the synthesized view, respectively.
Light Field Reconstruction Using Residual Networks on Raw Images
TLDR
This paper proposes a learning-based method to reconstruct densely sampled LF images from a sparse set of input images, trained with raw LF images rather than using multiple images of the same scene to restore more texture details and provide better quality.
Spatial-Angular Attention Network for Light Field Reconstruction
TLDR
A spatial-angular attention network is proposed to perceive non-local correspondences in the light field and reconstruct a high-angular-resolution light field in an end-to-end manner, with superior performance on sparsely sampled light fields with non-Lambertian effects.
NeuLF: Efficient Novel View Synthesis with Neural 4D Light Field
TLDR
This method can render novel views by sampling rays and querying the color for each ray from the network directly, thus enabling high-quality light field rendering with a sparser set of training images and enabling applications such as auto refocus.

References

Showing 1-10 of 57 references
Fast Light Field Reconstruction with Deep Coarse-to-Fine Modeling of Spatial-Angular Clues
TLDR
A learning-based algorithm that reconstructs a densely-sampled LF quickly and accurately from a sparsely-sampled LF in one forward pass, providing more than a 3 dB advantage in reconstruction quality on average over state-of-the-art methods while being computationally faster by a factor of 30.
The Light Field Camera: Extended Depth of Field, Aliasing, and Superresolution
TLDR
This work shows that applying traditional multiview stereo methods to the extracted low-resolution views can result in reconstruction errors due to aliasing, and incorporates Lambertian and texture-preserving priors to reconstruct both scene depth and its superresolved texture in a variational Bayesian framework.
Learning-based view synthesis for light field cameras
TLDR
This paper proposes a novel learning-based approach to synthesize new views from a sparse set of input views that could potentially decrease the required angular resolution of consumer light field cameras, which allows their spatial resolution to increase.
Light Field Super-Resolution Using a Low-Rank Prior and Deep Convolutional Neural Networks
TLDR
A learning-based spatial light field super-resolution method that allows the restoration of the entire light field with consistency across all angular views and is shown to be further improved using iterative back-projection as a post-processing step.
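The iterative back-projection mentioned as a post-processing step follows the classic loop: downsample the current high-resolution estimate, compare it against the observed low-resolution image, and push the residual back to the high-resolution domain. A minimal sketch, assuming box downsampling and nearest-neighbour upsampling (both simplifying choices):

```python
# Minimal sketch of iterative back-projection (IBP) refinement for
# super-resolution. Box downsampling and nearest-neighbour upsampling are
# simplifying assumptions, not the operators used in the paper.
import numpy as np

def downsample(img, f):
    H, W = img.shape
    return img[:H - H % f, :W - W % f].reshape(H // f, f, W // f, f).mean(axis=(1, 3))

def upsample(img, f):
    return np.repeat(np.repeat(img, f, axis=0), f, axis=1)

def back_project(sr, lr, factor=2, iters=10, step=1.0):
    """Refine a super-resolved estimate sr so it stays consistent with the
    observed low-resolution image lr."""
    for _ in range(iters):
        residual = lr - downsample(sr, factor)       # error in the LR domain
        sr = sr + step * upsample(residual, factor)  # push the error back to HR
    return sr
```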
Light Field Reconstruction Using Convolutional Network on EPI and Extended Applications
In this paper, a novel convolutional neural network (CNN)-based framework is developed for light field reconstruction from a sparse set of views. We indicate that the reconstruction can be…
Light Field Intrinsics with a Deep Encoder-Decoder Network
TLDR
A fully convolutional autoencoder for light fields, which jointly encodes stacks of horizontal and vertical epipolar plane images through a deep network of residual layers, and yields good results on previously unseen real world data captured by a Lytro Illum camera and various gantries.
Learning a Deep Convolutional Network for Light-Field Image Super-Resolution
TLDR
A novel method for Light-Field image super-resolution (SR) via a deep convolutional neural network, using a data-driven learning method to simultaneously up-sample both the angular and the spatial resolution of a Light-Field image.
Stereo Magnification: Learning View Synthesis using Multiplane Images
TLDR
This paper explores an intriguing scenario for view synthesis: extrapolating views from imagery captured by narrow-baseline stereo cameras, including VR cameras and now-widespread dual-lens camera phones, and proposes a learning framework that leverages a new layered representation that is called multiplane images (MPIs).
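Once an MPI is built, rendering a nearby view reduces to warping each RGBA plane into the target camera and compositing the planes back to front with the standard "over" operator. The sketch below shows only the compositing step (plane warping omitted; shapes illustrative):

```python
# Minimal sketch of multiplane image (MPI) rendering: back-to-front alpha
# compositing of RGBA planes already warped into the target view.
import numpy as np

def composite_mpi(rgb, alpha):
    """rgb: (D, H, W, 3), alpha: (D, H, W, 1), plane 0 = farthest."""
    out = np.zeros(rgb.shape[1:])
    for d in range(rgb.shape[0]):  # back to front
        out = rgb[d] * alpha[d] + out * (1.0 - alpha[d])  # "over" operator
    return out
```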
Learning Sheared EPI Structure for Light Field Reconstruction
TLDR
This paper presents a learning-based light field reconstruction approach that fuses a set of sheared epipolar plane images (EPIs), showing that a patch in a sheared EPI exhibits a clear structure when the shear value equals the depth of that patch.
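Shearing an EPI by a candidate disparity is a per-row shift proportional to the row's angular offset; lines whose slope matches the shear become vertical, which is the "clear structure" the method exploits. A minimal sketch with linear interpolation and a centre-referenced shear (names illustrative):

```python
# Minimal sketch of EPI shearing: each angular row is shifted in proportion to
# its offset from the centre row by the candidate shear (disparity) value.
import numpy as np

def shear_epi(epi, shear):
    """epi: (U, W) slice of the light field; shear: candidate disparity."""
    U, W = epi.shape
    out = np.zeros_like(epi)
    for u in range(U):
        shift = shear * (u - U // 2)                 # disparity-proportional shift
        xs = np.clip(np.arange(W) + shift, 0, W - 1)
        x0 = np.floor(xs).astype(int)
        w = xs - x0
        x1 = np.minimum(x0 + 1, W - 1)
        out[u] = (1 - w) * epi[u, x0] + w * epi[u, x1]  # linear interpolation
    return out
```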
Variational Light Field Analysis for Disparity Estimation and Super-Resolution
TLDR
The problem of view synthesis is formulated as a continuous inverse problem, which allows us to correctly take into account foreshortening effects caused by scene geometry transformations, and all optimization problems are solved with state-of-the-art convex relaxation techniques.