Corpus ID: 233181738

Modulated Periodic Activations for Generalizable Local Functional Representations

@article{Mehta2021ModulatedPA,
  title={Modulated Periodic Activations for Generalizable Local Functional Representations},
  author={Ishit Mehta and Micha{\"e}l Gharbi and Connelly Barnes and Eli Shechtman and Ravi Ramamoorthi and Manmohan Chandraker},
  journal={ArXiv},
  year={2021},
  volume={abs/2104.03960}
}
Multi-Layer Perceptrons (MLPs) make powerful functional representations for sampling and reconstruction problems involving low-dimensional signals such as images, shapes, and light fields. Recent works have significantly improved their ability to represent high-frequency content by using periodic activations or positional encodings. This often came at the expense of generalization: modern methods are typically optimized for a single signal. We present a new representation that generalizes to…
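The representation described in the abstract (a synthesis MLP with periodic activations whose hidden features are scaled by a second, ReLU-based modulation network conditioned on a per-signal latent code) can be sketched in a few lines. This is a minimal NumPy sketch of that idea; the layer widths, the `forward` helper, and the initialization constants are illustrative assumptions, not the authors' exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
D_IN, D_HIDDEN, D_LATENT, D_OUT, OMEGA = 2, 64, 32, 3, 30.0

def siren_layer(fan_in, fan_out, first=False):
    # SIREN-style uniform init keeps sine pre-activations well distributed.
    bound = 1.0 / fan_in if first else np.sqrt(6.0 / fan_in) / OMEGA
    return rng.uniform(-bound, bound, (fan_out, fan_in)), np.zeros(fan_out)

# Synthesis network (sine activations) and modulator network (ReLU) weights.
syn = [siren_layer(D_IN, D_HIDDEN, first=True),
       siren_layer(D_HIDDEN, D_HIDDEN),
       siren_layer(D_HIDDEN, D_HIDDEN)]
mod = [(rng.standard_normal((D_HIDDEN, D_LATENT)) * 0.1, np.zeros(D_HIDDEN)),
       (rng.standard_normal((D_HIDDEN, D_HIDDEN)) * 0.1, np.zeros(D_HIDDEN)),
       (rng.standard_normal((D_HIDDEN, D_HIDDEN)) * 0.1, np.zeros(D_HIDDEN))]
W_out, b_out = rng.standard_normal((D_OUT, D_HIDDEN)) * 0.01, np.zeros(D_OUT)

def forward(x, z):
    """x: input coordinates (D_IN,); z: per-signal latent code (D_LATENT,)."""
    h, m = x, z
    for (W, b), (Wm, bm) in zip(syn, mod):
        m = np.maximum(Wm @ m + bm, 0.0)     # ReLU modulator feature
        h = m * np.sin(OMEGA * (W @ h + b))  # elementwise-modulated sine layer
    return W_out @ h + b_out

rgb = forward(np.array([0.1, -0.3]), rng.standard_normal(D_LATENT))
print(rgb.shape)  # (3,)
```

Because the coordinates only enter the sine network and the latent code only enters the modulator, the same synthesis weights can serve many signals, which is the generalization property the abstract highlights.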
3 Citations
Ensemble Neural Representation Networks
TLDR: The performance of the proposed ensemble INR architecture may degrade as the dimensions of the sub-networks grow, so the paper proposes an optimization algorithm to find a near-optimal structure for the ensemble network.
Fast Training of Neural Lumigraph Representations using Meta Learning
TLDR: Develops a new neural rendering approach that quickly learns a high-quality representation which can also be rendered in real time, achieving similar or better novel view synthesis results in a fraction of the time competing methods require.
Multi-Head ReLU Implicit Neural Representation Networks
TLDR: Presents a novel multi-head multi-layer perceptron (MLP) structure for implicit neural representation (INR), and shows that the proposed model does not suffer from the spectral bias of conventional ReLU networks and has superior generalization capabilities.

References

Showing 1-10 of 49 references
Video Enhancement with Task-Oriented Flow
TLDR: Proposes task-oriented flow (TOFlow), a motion representation learned in a self-supervised, task-specific manner, which outperforms traditional optical flow in three video processing tasks on standard benchmarks as well as the Vimeo-90K dataset.
Implicit Neural Representations with Periodic Activation Functions
TLDR: Proposes to leverage periodic activation functions for implicit neural representations and demonstrates that these networks, dubbed sinusoidal representation networks or SIRENs, are ideally suited for representing complex natural signals and their derivatives.
DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation
TLDR: Introduces DeepSDF, a learned continuous Signed Distance Function (SDF) representation of a class of shapes that enables high-quality shape representation, interpolation, and completion from partial and noisy 3D input data.
Deep Learning Face Attributes in the Wild
TLDR: Proposes a novel deep learning framework for attribute prediction in the wild that cascades two CNNs, LNet and ANet, which are fine-tuned jointly with attribute tags but pre-trained differently.
Learning Implicit Fields for Generative Shape Modeling
  • Zhiqin Chen, Hao Zhang
  • Computer Science
  • 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2019
TLDR: Replacing conventional decoders with the implicit decoder for representation learning and shape generation demonstrates superior results for tasks such as generative shape modeling, interpolation, and single-view 3D reconstruction, particularly in terms of visual quality.
ShapeNet: An Information-Rich 3D Model Repository
TLDR: ShapeNet is a collection of datasets containing 3D models from a multitude of semantic categories, organized under the WordNet taxonomy, with many semantic annotations for each model such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, and keywords, as well as other planned annotations.
Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains
TLDR: Suggests an approach for selecting problem-specific Fourier features that greatly improves the performance of MLPs on low-dimensional regression tasks relevant to the computer vision and graphics communities.
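The Fourier feature mapping that reference describes passes input coordinates through random sinusoids before they reach the MLP. A minimal sketch of the mapping (the Gaussian frequency matrix and the bandwidth `scale=10.0` are illustrative assumptions, not the paper's tuned values):

```python
import numpy as np

rng = np.random.default_rng(0)

def fourier_features(x, B):
    """Map coordinates x (N, d) through random frequencies B (m, d):
    gamma(x) = [cos(2*pi*B@x), sin(2*pi*B@x)], an (N, 2m) embedding."""
    proj = 2.0 * np.pi * x @ B.T
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

# Gaussian random frequencies; the scale controls the bandwidth an MLP
# trained on the embedding can fit.
B = rng.normal(scale=10.0, size=(256, 2))
coords = rng.uniform(0.0, 1.0, size=(100, 2))
emb = fourier_features(coords, B)
print(emb.shape)  # (100, 512)
```

An MLP trained on `emb` instead of raw `coords` can represent much higher-frequency content, which is the effect the summary refers to.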
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
TLDR: Describes how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, demonstrating results that outperform prior work on neural rendering and view synthesis.
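The rendering step a radiance field is optimized through is a per-ray numerical quadrature of the volume rendering integral. A minimal sketch of that compositing formula (the sample densities, colors, and spacings below are made-up inputs, not NeRF's learned outputs):

```python
import numpy as np

def composite(sigmas, colors, deltas):
    """Quadrature of the volume rendering integral along one ray:
    C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    with transmittance T_i = prod_{j<i} exp(-sigma_j * delta_j)."""
    alpha = 1.0 - np.exp(-sigmas * deltas)  # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # T_i
    weights = trans * alpha
    return weights @ colors  # (3,) composited pixel color

sigmas = np.array([0.0, 0.5, 5.0, 0.1])  # densities at samples along a ray
colors = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
deltas = np.full(4, 0.25)                # spacing between samples
pixel = composite(sigmas, colors, deltas)
print(pixel)
```

Because every operation here is differentiable, the photometric loss on `pixel` can be backpropagated to the densities and colors, which is what makes the end-to-end optimization in the summary possible.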
DIST: Rendering Deep Implicit Signed Distance Function With Differentiable Sphere Tracing
TLDR: Proposes a differentiable sphere tracing algorithm that can effectively reconstruct accurate 3D shapes from various inputs, such as sparse depth and multi-view images, through inverse optimization, and shows excellent generalization capability and robustness against various noise.
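Sphere tracing, the rendering primitive that reference builds on, marches along a ray in steps equal to the SDF value, the largest step guaranteed not to cross the surface. A minimal sketch using an analytic sphere SDF in place of a learned network (the paper's differentiable, learned variant is more involved):

```python
import numpy as np

def sphere_sdf(p, center=np.zeros(3), radius=1.0):
    # Analytic SDF of a sphere, standing in for a learned SDF network.
    return np.linalg.norm(p - center) - radius

def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-4):
    """Step by the SDF value until the surface (sdf < eps) or give up."""
    t = 0.0
    for _ in range(max_steps):
        d = sdf(origin + t * direction)
        if d < eps:
            return t  # ray-surface hit distance
        t += d
    return None  # ray missed the surface

t_hit = sphere_trace(np.array([0.0, 0.0, -3.0]),
                     np.array([0.0, 0.0, 1.0]), sphere_sdf)
print(t_hit)  # 2.0: camera at z=-3 hits the unit sphere at z=-1
```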
Deep Local Shapes: Learning Local SDF Priors for Detailed 3D Reconstruction
TLDR: Introduces Deep Local Shapes (DeepLS), a deep shape representation that enables encoding and reconstruction of high-quality 3D shapes without prohibitive memory requirements, and demonstrates the effectiveness and generalization power of this representation.