Corpus ID: 238531688

Ensemble Neural Representation Networks

@article{Kadarvish2021EnsembleNR,
  title={Ensemble Neural Representation Networks},
  author={Milad Soltany Kadarvish and Hesam Mojtahedi and Hossein Entezari Zarch and A. Kazerouni and Alireza Morsali and Azra Abtahi and Farokh Marvasti},
  journal={ArXiv},
  year={2021},
  volume={abs/2110.04124}
}
Implicit Neural Representation (INR) has recently attracted considerable attention for storing various types of signals in continuous form. Existing INR networks, however, require lengthy training and high-performance computational resources. In this paper, we propose a novel sub-optimal ensemble architecture for INR that resolves the aforementioned problems. In this architecture, the representation task is divided into several sub-tasks, each handled by an independent subnetwork. We show that the…
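The abstract stops short of the details, so the following is only a rough sketch of the general idea, not the authors' implementation: a hypothetical `EnsembleINR` that partitions the coordinate domain into a grid of patches and assigns each patch to an independent sub-network, so the sub-networks can be trained separately. The patch grid, layer sizes, and routing scheme below are all assumptions.

```python
import torch
import torch.nn as nn

class SmallINR(nn.Module):
    """A small coordinate MLP: (x, y) in [0, 1]^2 -> RGB."""
    def __init__(self, hidden=64, layers=3):
        super().__init__()
        blocks, in_dim = [], 2
        for _ in range(layers):
            blocks += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        blocks += [nn.Linear(in_dim, 3)]
        self.net = nn.Sequential(*blocks)

    def forward(self, coords):
        return self.net(coords)

class EnsembleINR(nn.Module):
    """Hypothetical ensemble: the image domain is split into a grid of patches
    and each patch is represented by its own independent sub-network."""
    def __init__(self, grid=4):
        super().__init__()
        self.grid = grid
        self.subnets = nn.ModuleList([SmallINR() for _ in range(grid * grid)])

    def forward(self, coords):
        # Route each coordinate to the sub-network owning its patch.
        cell = (coords * self.grid).clamp(max=self.grid - 1e-6).long()
        idx = cell[:, 1] * self.grid + cell[:, 0]
        out = torch.zeros(coords.shape[0], 3, device=coords.device)
        for k, net in enumerate(self.subnets):
            mask = idx == k
            if mask.any():
                out[mask] = net(coords[mask])
        return out
```

Because each sub-network only ever sees coordinates from its own patch, the sub-networks can in principle be trained in parallel on disjoint pixel subsets, which is presumably where the savings in training time and compute come from.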


References

Showing 1-10 of 19 references
On the Spectral Bias of Neural Networks
TLDR
This work shows that deep ReLU networks are biased towards low-frequency functions, and studies the robustness of the frequency components with respect to parameter perturbation, developing the intuition that the parameters must be finely tuned to express high-frequency functions.
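As a hedged illustration of this bias (the target signal, network size, and step counts below are arbitrary choices, not the paper's experiments), one can fit a ReLU MLP to a signal containing one low and one high frequency and track the residual spectrum; the low-frequency component is typically matched long before the high-frequency one.

```python
import math
import torch
import torch.nn as nn

# Target: a low-frequency plus a high-frequency sinusoid on [0, 1].
x = torch.linspace(0, 1, 512).unsqueeze(-1)
y = torch.sin(2 * math.pi * 1 * x) + 0.5 * torch.sin(2 * math.pi * 32 * x)

net = nn.Sequential(
    nn.Linear(1, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    opt.zero_grad()
    loss = ((net(x) - y) ** 2).mean()
    loss.backward()
    opt.step()
    if step % 500 == 0:
        # Residual spectrum: in line with the paper's observation, the k=1
        # bin usually shrinks much earlier than the k=32 bin.
        spectrum = torch.fft.rfft((net(x) - y).squeeze(-1).detach()).abs()
        print(step, loss.item(), spectrum[1].item(), spectrum[32].item())
```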
Implicit Neural Representations with Periodic Activation Functions
TLDR
This work proposes to leverage periodic activation functions for implicit neural representations and demonstrates that these networks, dubbed sinusoidal representation networks or Sirens, are ideally suited for representing complex natural signals and their derivatives.
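A minimal sketch of such a sine layer is shown below; the omega_0 = 30 default and the initialization bounds follow the paper's recommendations, while the layer widths are placeholders.

```python
import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by sin(omega_0 * x), as used in SIREN-style networks."""
    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            # First layer gets a wider uniform range; later layers are scaled
            # by 1/omega_0 to keep activations well distributed.
            if is_first:
                bound = 1.0 / in_features
            else:
                bound = math.sqrt(6.0 / in_features) / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

# A tiny Siren: 2D coordinates -> grayscale intensity.
siren = nn.Sequential(
    SineLayer(2, 128, is_first=True),
    SineLayer(128, 128),
    nn.Linear(128, 1),
)
```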
Occupancy Networks: Learning 3D Reconstruction in Function Space
TLDR
This paper proposes Occupancy Networks, a new representation for learning-based 3D reconstruction methods that encodes a description of the 3D output at infinite resolution without excessive memory footprint, and validates that the representation can efficiently encode 3D structure and can be inferred from various kinds of input.
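The core object is a network mapping a 3D point to an occupancy probability in [0, 1], with the surface recovered as a level set (e.g. 0.5). A minimal sketch follows, omitting the conditioning on the input observation that the real model uses.

```python
import torch
import torch.nn as nn

class OccupancyNet(nn.Module):
    """Maps a 3D point to an occupancy probability in [0, 1].
    The real Occupancy Network also conditions on an encoding of the input
    observation (image or point cloud); that conditioning is omitted here."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, points):
        return torch.sigmoid(self.net(points))

# The field can be queried at arbitrary resolution: memory does not grow
# with the desired output resolution, only with the number of query points.
occ = OccupancyNet()
probs = occ(torch.rand(10_000, 3))  # occupancy probability per query point
```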
Implicit Surface Representations As Layers in Neural Networks
TLDR
This work proposes a novel formulation that permits the use of implicit representations of curves and surfaces, of arbitrary topology, as individual layers in Neural Network architectures with end-to-end trainability, and proposes to represent the output as an oriented level set of a continuous and discretised embedding function.
Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains
TLDR
This work suggests an approach for selecting problem-specific Fourier features that greatly improves the performance of MLPs on low-dimensional regression tasks relevant to the computer vision and graphics communities.
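The mapping itself is simple: input coordinates v are lifted to [cos(2πBv), sin(2πBv)] with a random Gaussian matrix B before being passed to an ordinary MLP, and the scale of B is the problem-specific choice. A sketch with placeholder sizes:

```python
import math
import torch
import torch.nn as nn

class FourierFeatures(nn.Module):
    """Random Fourier feature mapping: v -> [cos(2*pi*Bv), sin(2*pi*Bv)].
    The number of features and the scale of B are problem-specific choices;
    the values here are placeholders."""
    def __init__(self, in_dim=2, num_features=256, scale=10.0):
        super().__init__()
        # Fixed (non-trainable) random projection.
        self.register_buffer("B", torch.randn(in_dim, num_features) * scale)

    def forward(self, v):
        proj = 2 * math.pi * v @ self.B
        return torch.cat([torch.cos(proj), torch.sin(proj)], dim=-1)

# The mapped coordinates feed an ordinary ReLU MLP.
model = nn.Sequential(
    FourierFeatures(in_dim=2, num_features=256, scale=10.0),
    nn.Linear(512, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 3),
)
```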
Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks
TLDR
This work introduces a class of CNNs called deep convolutional generative adversarial networks (DCGANs) that satisfy certain architectural constraints, and demonstrates that they are a strong candidate for unsupervised learning.
KiloNeRF: Speeding up Neural Radiance Fields with Thousands of Tiny MLPs
TLDR
This work demonstrates that real-time rendering is possible by using thousands of tiny MLPs instead of a single large MLP, and that, with teacher-student distillation for training, this speed-up can be achieved without sacrificing visual quality.
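As a rough sketch of the distillation idea (not the paper's pipeline: the teacher below is an untrained stand-in for a pretrained NeRF, view-direction inputs are omitted, and the grid is far coarser than in the paper), each tiny student MLP is trained to match the teacher's output on points sampled inside its own cell.

```python
import torch
import torch.nn as nn

def tiny_mlp(in_dim=3, hidden=32, out_dim=4):
    """A tiny MLP predicting (density, RGB) from a 3D position."""
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim),
    )

grid = 4  # 4 x 4 x 4 cells; the paper uses a much finer grid
students = nn.ModuleList([tiny_mlp() for _ in range(grid ** 3)])
teacher = tiny_mlp(hidden=256)  # stand-in for a pretrained large model
teacher.requires_grad_(False)

opt = torch.optim.Adam(students.parameters(), lr=1e-3)
for step in range(100):
    opt.zero_grad()
    loss = 0.0
    for k, student in enumerate(students):
        # Sample points inside cell k and match the teacher's output there.
        cell = torch.tensor([k % grid, (k // grid) % grid, k // grid ** 2])
        pts = (cell + torch.rand(64, 3)) / grid  # points in [0, 1]^3
        loss = loss + ((student(pts) - teacher(pts)) ** 2).mean()
    loss.backward()
    opt.step()
```

At render time, each query point is routed to the single tiny MLP that owns its cell, which is what makes the per-sample cost so small.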
SAL: Sign Agnostic Learning of Shapes From Raw Data
Matan Atzmon and Y. Lipman. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
TLDR
This paper introduces Sign Agnostic Learning (SAL), a deep learning approach for learning implicit shape representations directly from raw, unsigned geometric data such as point clouds and triangle soups, and argues that it opens the door to many geometric deep learning applications with real-world data.
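The essential trick is that the loss compares |f(x)| to an unsigned distance computed directly from the raw points, so no inside/outside (sign) labels are required. A simplified sketch, where the sampling strategy and the specific loss variant are assumptions:

```python
import torch
import torch.nn as nn

# Implicit shape function f: R^3 -> R whose zero level set is the surface.
f = nn.Sequential(
    nn.Linear(3, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 1),
)

def sign_agnostic_loss(f, samples, raw_points):
    """Simplified sign-agnostic regression: match |f(x)| to the unsigned
    distance from x to the raw point cloud."""
    unsigned = torch.cdist(samples, raw_points).min(dim=1).values
    return (f(samples).squeeze(-1).abs() - unsigned).abs().mean()

raw_points = torch.rand(2048, 3)   # stand-in for a raw scan
samples = torch.rand(512, 3)       # query points around the shape
loss = sign_agnostic_loss(f, samples, raw_points)
```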
Local Implicit Grid Representations for 3D Scenes
TLDR
This paper introduces Local Implicit Grid Representations, a new 3D shape representation designed for scalability and generality, and demonstrates its value for 3D surface reconstruction from sparse point observations, showing significantly better results than alternative approaches.
DeepVoxels: Learning Persistent 3D Feature Embeddings
TLDR
This work proposes DeepVoxels, a learned representation that encodes the view-dependent appearance of a 3D scene without having to explicitly model its geometry, based on a Cartesian 3D grid of persistent embedded features that learn to make use of the underlying 3D scene structure.