Corpus ID: 219720969

MetaSDF: Meta-learning Signed Distance Functions

@article{Sitzmann2020MetaSDFMS,
  title={MetaSDF: Meta-learning Signed Distance Functions},
  author={Vincent Sitzmann and Eric Chan and Richard Tucker and Noah Snavely and Gordon Wetzstein},
  journal={ArXiv},
  year={2020},
  volume={abs/2006.09662}
}
Neural implicit shape representations are an emerging paradigm that offers many potential benefits over conventional discrete representations, including memory efficiency at a high spatial resolution. Generalizing across shapes with such neural implicit representations amounts to learning priors over the respective function space and enables geometry reconstruction from partial or noisy observations. Existing generalization methods rely on conditioning a neural network on a low-dimensional… 
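The abstract's core idea — meta-learning a shared initialization so a signed-distance model can adapt to a new shape from a few observations — can be illustrated with a minimal, hypothetical sketch. For brevity this uses a linear model over hand-picked features and a first-order (Reptile-style) meta-update rather than the paper's MLP and second-order MAML; every function and parameter below is illustrative, not the authors' implementation.

```python
# Minimal sketch (assumed, not the paper's code): meta-learn an initialization
# for a signed-distance model so it adapts to an unseen shape in a few steps.
import numpy as np

rng = np.random.default_rng(0)

def circle_sdf(points, radius):
    # Ground-truth signed distance to a circle: negative inside, positive outside.
    return np.linalg.norm(points, axis=1) - radius

def features(points):
    # Features [|x|, 1] let a linear model represent any circle's SDF exactly.
    r = np.linalg.norm(points, axis=1, keepdims=True)
    return np.concatenate([r, np.ones_like(r)], axis=1)

def inner_adapt(w, points, targets, steps=5, lr=0.1):
    # Inner loop: a few gradient-descent steps on MSE from the shared init w.
    X = features(points)
    for _ in range(steps):
        grad = 2.0 * X.T @ (X @ w - targets) / len(targets)
        w = w - lr * grad
    return w

# Outer loop: nudge the shared initialization toward each task's adapted weights
# (first-order Reptile update, standing in for the paper's MAML outer loop).
w_meta = np.zeros(2)
for _ in range(500):
    radius = rng.uniform(0.5, 1.5)             # each "task" is one shape
    pts = rng.uniform(-2.0, 2.0, size=(64, 2))
    w_task = inner_adapt(w_meta, pts, circle_sdf(pts, radius))
    w_meta += 0.1 * (w_task - w_meta)

# Test time: adapt to an unseen shape from a handful of observed SDF samples.
pts = rng.uniform(-2.0, 2.0, size=(32, 2))
w = inner_adapt(w_meta, pts, circle_sdf(pts, radius=1.2))
inside = (features(np.array([[0.0, 0.0]])) @ w)[0]   # should be negative
outside = (features(np.array([[2.0, 0.0]])) @ w)[0]  # should be positive
print(inside, outside)
```

The meta-learned initialization encodes a prior over the shape family, so only a few inner-loop steps are needed at test time — the same role the learned initialization plays for the neural SDFs in the paper.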

Citations

Meta-Learning Sparse Implicit Neural Representations
TLDR
This work proposes to combine a meta-learning approach with network compression under a sparsity constraint, yielding a well-initialized sparse parameterization that quickly adapts to represent a set of unseen signals during subsequent training.
Metappearance: Meta-Learning for Visual Appearance Reproduction
TLDR
This work suggests combining both techniques end-to-end using meta-learning: over-fit to a single problem instance in an inner loop, while learning how to do so efficiently in an outer loop that builds intuition over many optimization runs.
Object Pursuit: Building a Space of Objects via Discriminative Weight Generation
TLDR
To mitigate the annotation burden and relax the constraints on the statistical complexity of the data, the method leverages interactions to effectively sample diverse variations of an object and the corresponding training signals while learning the object-centric representations.
Object Pursuit: Building a Space of Objects via Discriminative Weight Generation
TLDR
This work proposes a framework to continuously learn object-centric representations for visual learning and understanding that can improve label efficiency in downstream tasks and performs an extensive study of the key features of the proposed framework and analyze the characteristics of the learned representations.
Learned Initializations for Optimizing Coordinate-Based Neural Representations
TLDR
This work applies standard meta-learning algorithms to learn the initial weight parameters for fully-connected coordinate-based neural representations based on the underlying class of signals being represented, enabling faster convergence during optimization and better generalization when only partial observations of a given signal are available.
Meta-Learning Sparse Compression Networks
TLDR
This paper introduces the first method allowing for sparsification to be employed in the inner-loop of commonly used Meta-Learning algorithms, drastically improving both compression and the computational cost of learning INRs.
Learning Signal-Agnostic Manifolds of Neural Fields
TLDR
This model, dubbed GEM, learns to capture the underlying structure of datasets across modalities in image, shape, audio and cross-modal audiovisual domains in a modality-independent manner, and shows that by walking across the underlying manifold of GEM, the model can generate new samples in the signal domains.
Mending Neural Implicit Modeling for 3D Vehicle Reconstruction in the Wild
TLDR
This work demonstrates high-quality in-the-wild shape reconstruction using a deep encoder as a robust initializer of the shape latent code, a deep discriminator as a learned high-dimensional shape prior, and a novel curriculum learning strategy that allows the model to learn shape priors on synthetic data and smoothly transfer them to sparse real-world data.
Towards Generalising Neural Implicit Representations
TLDR
This work shows that training neural representations for reconstruction tasks alongside conventional tasks can produce more general encodings that match the reconstruction quality of single-task training, while improving results on conventional tasks compared to single-task encodings.
Unified Implicit Neural Stylization
TLDR
This work explores a new intriguing direction: training a stylized implicit representation, using a generalized approach that can apply to various 2D and 3D scenarios, and demonstrates that the learned representation is continuous not only spatially but also style-wise, leading to effortlessly interpolating between different styles and generating images with new mixed styles.
...
...

References

SHOWING 1-10 OF 50 REFERENCES
MetaFun: Meta-Learning with Iterative Functional Updates
TLDR
This approach is the first to demonstrate the success of encoder-decoder style meta-learning methods like conditional neural processes on large-scale few-shot classification benchmarks such as miniImageNet and tieredImageNet, where it achieves state-of-the-art performance.
Convolutional Occupancy Networks
TLDR
Convolutional Occupancy Networks is proposed, a more flexible implicit representation for detailed reconstruction of objects and 3D scenes that enables the fine-grained implicit 3D reconstruction of single objects, scales to large indoor scenes, and generalizes well from synthetic to real data.
Implicit Surface Representations As Layers in Neural Networks
TLDR
This work proposes a novel formulation that permits the use of implicit representations of curves and surfaces, of arbitrary topology, as individual layers in Neural Network architectures with end-to-end trainability, and proposes to represent the output as an oriented level set of a continuous and discretised embedding function.
Multi-task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics
TLDR
A principled approach to multi-task deep learning is proposed which weighs multiple loss functions by considering the homoscedastic uncertainty of each task, allowing us to simultaneously learn various quantities with different units or scales in both classification and regression settings.
Meta-Learning with Latent Embedding Optimization
TLDR
This work shows that latent embedding optimization can achieve state-of-the-art performance on the competitive miniImageNet and tieredImageNet few-shot classification tasks, and indicates LEO is able to capture uncertainty in the data, and can perform adaptation more effectively by optimizing in latent space.
Learning Implicit Fields for Generative Shape Modeling
  • Zhiqin Chen, Hao Zhang
  • Computer Science
    2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
TLDR
By replacing conventional decoders with the implicit decoder for representation learning and shape generation, this work demonstrates superior results for tasks such as generative shape modeling, interpolation, and single-view 3D reconstruction, particularly in terms of visual quality.
Occupancy Networks: Learning 3D Reconstruction in Function Space
TLDR
This paper proposes Occupancy Networks, a new representation for learning-based 3D reconstruction methods that encodes a description of the 3D output at infinite resolution without excessive memory footprint, and validate that the representation can efficiently encode 3D structure and can be inferred from various kinds of input.
Implicit Geometric Regularization for Learning Shapes
TLDR
It is observed that a rather simple loss function, encouraging the neural network to vanish on the input point cloud and to have a unit norm gradient, possesses an implicit geometric regularization property that favors smooth and natural zero level set surfaces, avoiding bad zero-loss solutions.
Semantic Implicit Neural Scene Representations With Semi-Supervised Training
TLDR
This work demonstrates that an existing implicit representation (SRNs) is actually multi-modal; it can be further leveraged to perform per-point semantic segmentation while retaining its ability to represent appearance and geometry and utilizes a semi-supervised learning strategy atop the existing pre-trained scene representation.
SAL: Sign Agnostic Learning of Shapes From Raw Data
  • Matan Atzmon, Y. Lipman
  • Computer Science
    2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2020
TLDR
This paper introduces Sign Agnostic Learning (SAL), a deep learning approach for learning implicit shape representations directly from raw, unsigned geometric data, such as point clouds and triangle soups, and believes it opens the door to many geometric deep learning applications with real-world data.
...
...