
DeepSurfels: Learning Online Appearance Fusion

@inproceedings{Mihajlovi2021DeepSurfelsLO,
  title={DeepSurfels: Learning Online Appearance Fusion},
  author={Marko Mihajlovi{\'c} and Silvan Weder and Marc Pollefeys and Martin R. Oswald},
  booktitle={CVPR},
  year={2021}
}
We present DeepSurfels, a novel hybrid scene representation for geometry and appearance information. DeepSurfels combines explicit and neural building blocks to jointly encode geometry and appearance information. In contrast to established representations, DeepSurfels better represents high-frequency textures, is well-suited for online updates of appearance information, and can be easily combined with machine learning methods. We further present an end-to-end trainable online appearance fusion…
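The abstract stops short of the fusion details, so the sketch below is only a rough illustration of the representation's flavor, not the paper's method: surfels (oriented point primitives) that each carry a small appearance feature buffer, refined online as frames arrive. The class name, feature size, and the hand-crafted confidence-weighted average standing in for the learned fusion operator are all hypothetical.

import numpy as np

FEATURE_DIM = 8  # illustrative size of the per-surfel appearance code

class Surfel:
    """Oriented point primitive carrying an appearance feature buffer."""
    def __init__(self, position, normal):
        self.position = np.asarray(position, dtype=np.float32)
        self.normal = np.asarray(normal, dtype=np.float32)
        self.feature = np.zeros(FEATURE_DIM, dtype=np.float32)
        self.weight = 0.0  # accumulated fusion confidence

    def fuse(self, observed_feature, confidence):
        # Confidence-weighted running average; DeepSurfels learns this
        # update end-to-end, so this linear rule is only a stand-in.
        total = self.weight + confidence
        self.feature = (self.weight * self.feature +
                        confidence * observed_feature) / total
        self.weight = min(total, 64.0)  # cap so old frames stay revisable

s = Surfel(position=[0.0, 0.0, 0.0], normal=[0.0, 0.0, 1.0])
s.fuse(np.random.rand(FEATURE_DIM).astype(np.float32), confidence=0.9)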
Citations

MetaAvatar: Learning Animatable Clothed Human Models from Few Depth Images
This paper proposes an approach that quickly generates realistic clothed human avatars, represented as controllable neural SDFs, from only monocular depth images. It qualitatively and quantitatively outperforms state-of-the-art approaches that require complete meshes as input, while itself requiring only depth frames and running orders of magnitude faster.
LEAP: Learning Articulated Occupancy of People
Experiments show that canonicalized occupancy estimation with the learned LBS functions greatly improves the generalization of the learned occupancy representation across human shapes and poses, outperforming existing solutions in all settings.
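LEAP's learned LBS functions build on classical linear blend skinning, which is sketched below for reference; LEAP's actual contribution (learning the skinning weight fields and canonicalizing query points) is not reproduced here.

import numpy as np

def linear_blend_skinning(vertices, weights, bone_transforms):
    """Classical LBS: each vertex moves by a weighted blend of bone
    transforms. vertices: (N, 3); weights: (N, K), rows sum to 1;
    bone_transforms: (K, 4, 4) homogeneous bone matrices."""
    n = vertices.shape[0]
    homog = np.concatenate([vertices, np.ones((n, 1))], axis=1)  # (N, 4)
    blended = np.einsum('nk,kij->nij', weights, bone_transforms)  # (N, 4, 4)
    deformed = np.einsum('nij,nj->ni', blended, homog)            # (N, 4)
    return deformed[:, :3]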
LEAP: Learning Articulated Occupancy of People – Supplementary Material
In this supplementary document, we first provide details about the proposed neural network modules (Sec. B) and their training procedure (Sec. C). Then, we present more qualitative and quantitative…

References

Showing 1–10 of 113 references
NeuralFusion: Online Depth Fusion in Latent Space
This work presents a novel online depth map fusion approach that learns depth map aggregation in a latent feature space, separating the scene representation used for fusion from the output scene representation via an additional translator network.
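The separation described above can be pictured as two small networks around a persistent latent grid: one integrates each new observation into the latent state, the other translates the latent state into the output representation. The architecture below is purely illustrative (layer sizes, the residual update, and the TSDF decoding are assumptions, not the paper's design).

import torch
import torch.nn as nn

C = 8  # latent feature channels per voxel (illustrative)

fusion_net = nn.Sequential(      # integrates a new observation into the state
    nn.Conv3d(C + 1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, C, 3, padding=1),
)
translator_net = nn.Sequential(  # decodes latent state -> output scene (TSDF)
    nn.Conv3d(C, 16, 1), nn.ReLU(),
    nn.Conv3d(16, 1, 1), nn.Tanh(),
)

latent = torch.zeros(1, C, 64, 64, 64)       # persistent latent scene state
observation = torch.randn(1, 1, 64, 64, 64)  # one projected depth observation
latent = latent + fusion_net(torch.cat([latent, observation], dim=1))
tsdf = translator_net(latent)                # decoded output representation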
DeepVoxels: Learning Persistent 3D Feature Embeddings
This work proposes DeepVoxels, a learned representation that encodes the view-dependent appearance of a 3D scene without explicitly modeling its geometry. It is based on a Cartesian 3D grid of persistent embedded features that learn to exploit the underlying 3D scene structure.
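The core primitive of such a grid-of-features representation is differentiable sampling of persistent embeddings at continuous 3D locations. A minimal PyTorch sketch (grid size and channel count are arbitrary): with a 5D input, grid_sample's 'bilinear' mode performs trilinear interpolation, so gradients flow back into the stored features.

import torch
import torch.nn.functional as F

# Persistent learned feature grid, shape (N, C, D, H, W).
features = torch.randn(1, 8, 32, 32, 32, requires_grad=True)

# Query points in normalized [-1, 1] coordinates, ordered (x, y, z).
points = torch.rand(1, 1, 1, 100, 3) * 2 - 1   # 100 query points
sampled = F.grid_sample(features, points, mode='bilinear',
                        align_corners=True)    # (1, 8, 1, 1, 100)
per_point = sampled.view(8, 100).t()           # one feature per query point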
DeepView: View Synthesis With Learned Gradient Descent
This work presents a novel approach to view synthesis using multiplane images (MPIs) that incorporates occlusion reasoning, improving performance on challenging scene features such as object boundaries, lighting reflections, thin structures, and scenes with high depth complexity.
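DeepView's learned-gradient-descent solver estimates the MPI itself; the renderer that turns an MPI into an image is the fixed back-to-front "over" compositing shown below (assuming the planes are already warped into the target view).

import numpy as np

def composite_mpi(colors, alphas):
    """Alpha-composite multiplane images back to front.
    colors: (D, H, W, 3) RGB per plane, ordered far to near;
    alphas: (D, H, W, 1) per-plane opacity in [0, 1]."""
    out = np.zeros(colors.shape[1:], dtype=np.float32)
    for rgb, a in zip(colors, alphas):
        out = rgb * a + out * (1.0 - a)  # standard "over" operator
    return out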
RoutedFusion: Learning Real-Time Depth Map Fusion
This work proposes a neural network that predicts non-linear updates to better account for typical fusion errors, outperforming the traditional fusion approach and related learned approaches on both synthetic and real data.
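For context, the traditional fusion approach mentioned above is the classical weighted running average over TSDF values (in the spirit of Curless and Levoy); RoutedFusion's network replaces this linear rule with a learned non-linear update. The classical baseline:

import numpy as np

def tsdf_update(tsdf, weight, new_tsdf, new_weight, max_weight=128.0):
    """Classical TSDF fusion: per-voxel weighted running average.
    RoutedFusion learns a non-linear replacement for this update."""
    fused = (weight * tsdf + new_weight * new_tsdf) / (weight + new_weight)
    return fused, np.minimum(weight + new_weight, max_weight)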
Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations
Scene Representation Networks (SRNs) are a continuous, 3D-structure-aware scene representation that encodes both geometry and appearance. They are demonstrated on novel view synthesis, few-shot reconstruction, joint shape and appearance interpolation, and unsupervised discovery of a non-rigid face model.
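At its core, an SRN is an MLP from 3D coordinates to feature vectors, combined with a learned ray marcher that walks each camera ray to the surface. The sketch below simplifies aggressively: SRNs predict step lengths with an LSTM, whereas this version uses a plain linear head and a fixed iteration count, and all layer sizes are illustrative.

import torch
import torch.nn as nn

scene_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 64))
step_head = nn.Linear(64, 1)  # feature -> next marching step length

def march(origin, direction, n_steps=10):
    """Walk rays toward the surface; origin, direction: (B, 3)."""
    t = torch.zeros(origin.shape[0], 1)
    for _ in range(n_steps):
        feat = scene_mlp(origin + t * direction)
        t = t + torch.abs(step_head(feat))  # always step forward
    return origin + t * direction           # estimated surface points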
Differentiable Volumetric Rendering: Learning Implicit 3D Representations Without 3D Supervision
This work proposes a differentiable rendering formulation for implicit shape and texture representations, showing that depth gradients can be derived analytically using implicit differentiation. The method can be used for multi-view 3D reconstruction, directly producing watertight meshes.
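The analytic depth gradient comes from implicit differentiation of the level-set condition at the surface; a sketch in our own notation (signs and symbols may differ from the paper's):

% Let f_theta be the implicit field and \hat{p} = r_0 + t^* \hat{d} the
% ray's surface hit, so that f_\theta(\hat{p}) = \tau. Differentiating
% this condition with respect to the parameters \theta:
\[
  \frac{\partial f_\theta}{\partial p}\Big|_{\hat{p}} \cdot \hat{d}\,
  \frac{\partial t^{*}}{\partial \theta}
  + \frac{\partial f_\theta}{\partial \theta}\Big|_{\hat{p}} = 0
  \quad\Longrightarrow\quad
  \frac{\partial t^{*}}{\partial \theta}
  = -\left(\nabla_{p} f_\theta \cdot \hat{d}\right)^{-1}
    \frac{\partial f_\theta}{\partial \theta},
\]

so the depth gradient exists in closed form at the surface point alone, with no need to store intermediate results along the ray.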
3D Appearance Super-Resolution With Deep Learning
Experimental results demonstrate that the proposed networks successfully incorporate the 3D geometric information and super-resolve the texture maps.
Occupancy Networks: Learning 3D Reconstruction in Function Space
This paper proposes Occupancy Networks, a new representation for learning-based 3D reconstruction that encodes a description of the 3D output at infinite resolution without an excessive memory footprint, and validates that the representation can efficiently encode 3D structure and be inferred from various kinds of input.
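Concretely, an occupancy network is an MLP that maps a 3D point plus a shape latent code to an occupancy probability, so the output resolution is limited only by where the function is queried. A minimal PyTorch sketch (layer sizes and latent dimension are illustrative):

import torch
import torch.nn as nn

class OccupancyNet(nn.Module):
    """Point + shape code -> probability that the point is inside."""
    def __init__(self, latent_dim=128, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points, latent):
        # points: (B, N, 3); latent: (B, latent_dim), one code per shape
        latent = latent.unsqueeze(1).expand(-1, points.shape[1], -1)
        return torch.sigmoid(self.mlp(torch.cat([points, latent], dim=-1)))

net = OccupancyNet()
occupancy = net(torch.rand(2, 1000, 3), torch.randn(2, 128))  # (2, 1000, 1)

A mesh is then extracted by thresholding this function on a query grid, e.g. with marching cubes.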
Texture Fields: Learning Texture Representations in Function Space
This work proposes Texture Fields, a novel texture representation based on regressing a continuous 3D function parameterized by a neural network, which can represent high-frequency texture and blends naturally with modern deep learning techniques.
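A texture field follows the same coordinate-MLP recipe as the occupancy sketch above, but regresses RGB color at continuous surface points instead of occupancy (again with illustrative sizes and a hypothetical per-object appearance code):

import torch
import torch.nn as nn

texture_field = nn.Sequential(
    nn.Linear(3 + 128, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 3), nn.Sigmoid(),         # RGB in [0, 1]
)
points = torch.rand(1000, 3)                 # surface points to color
latent = torch.randn(128).expand(1000, 128)  # per-object appearance code
rgb = texture_field(torch.cat([points, latent], dim=-1))  # (1000, 3)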
Image-guided Neural Object Rendering
This work presents a novel method for photo-realistic re-rendering of reconstructed objects that combines the benefits of image-based rendering and GAN-based image synthesis, and proposes EffectsNet, a deep neural network that predicts view-dependent effects.