Signal Processing for Implicit Neural Representations

@article{Xu2022SignalPF,
  title={Signal Processing for Implicit Neural Representations},
  author={Dejia Xu and Peihao Wang and Yifan Jiang and Zhiwen Fan and Zhangyang Wang},
  journal={ArXiv},
  year={2022},
  volume={abs/2210.08772}
}
Implicit Neural Representations (INRs), which encode continuous multimedia data via multi-layer perceptrons, have shown undeniable promise in various computer vision tasks. Despite many successful applications, editing and processing an INR remains intractable, as signals are represented by the latent parameters of a neural network. Existing works manipulate such continuous representations by processing their discretized instances, which breaks down the compactness and continuous nature of INRs. In… 
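
The abstract's INR setup can be made concrete with a minimal coordinate MLP that regresses pixel coordinates to colour values; the widths, depth, and ReLU activation below are illustrative placeholders, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

# Minimal coordinate MLP: maps (x, y) pixel coordinates to RGB values.
# The whole signal lives in the network weights, which is why editing an INR
# directly (rather than its rendered, discretized image) is non-trivial.
class INR(nn.Module):
    def __init__(self, in_dim=2, hidden=256, out_dim=3, depth=4):
        super().__init__()
        dims = [in_dim] + [hidden] * depth
        layers = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            layers += [nn.Linear(d_in, d_out), nn.ReLU()]
        layers.append(nn.Linear(hidden, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, coords):              # coords: (N, 2) in [-1, 1]
        return self.net(coords)             # (N, 3) predicted RGB

# Fitting one signal: regress the MLP onto sampled (coordinate, value) pairs.
model = INR()
coords = torch.rand(1024, 2) * 2 - 1        # random pixel locations
target = torch.rand(1024, 3)                # placeholder ground-truth colours
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss = ((model(coords) - target) ** 2).mean()
loss.backward()
opt.step()
```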

Equivariant Architectures for Learning in Deep Weight Spaces

A novel network architecture for learning in deep weight spaces that takes as input a concatenation of weights and biases of a pre-trained MLP and processes it using a composition of layers that are equivariant to the natural permutation symmetry of the MLP’s weights.
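
The permutation symmetry these layers are built to respect can be verified directly: relabeling the hidden units of an MLP (permuting the rows of the first weight matrix and bias, and the columns of the second weight matrix) leaves the computed function unchanged. The snippet below only checks that symmetry on a toy two-layer MLP; it is not the proposed equivariant architecture.

```python
import torch

torch.manual_seed(0)
W1, b1 = torch.randn(8, 2), torch.randn(8)    # layer 1: 2 -> 8
W2, b2 = torch.randn(3, 8), torch.randn(3)    # layer 2: 8 -> 3
x = torch.randn(5, 2)

def mlp(W1, b1, W2, b2, x):
    return torch.relu(x @ W1.T + b1) @ W2.T + b2

perm = torch.randperm(8)                      # relabel the hidden neurons
same = torch.allclose(
    mlp(W1, b1, W2, b2, x),
    mlp(W1[perm], b1[perm], W2[:, perm], b2, x),
    atol=1e-6,
)
print(same)                                   # True: the function is unchanged
```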

RecolorNeRF: Layer Decomposed Radiance Field for Efficient Color Editing of 3D Scenes

RecolorNeRF is presented, a novel user-friendly color editing approach for neural radiance fields that outperforms baseline methods both quantitatively and qualitatively for color editing, even in complex real-world scenes.

References

SHOWING 1-10 OF 89 REFERENCES

Neural Implicit Dictionary Learning via Mixture-of-Expert Training

This work presents a generic INR framework that achieves both data and training efficiency by learning a Neural Implicit Dictionary (NID) from a data collection and representing an INR as a functional combination of basis functions sampled from the dictionary.
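
Read literally, "a functional combination of basis functions sampled from the dictionary" suggests an output formed as a weighted sum of shared basis networks; the sketch below follows that reading with a soft mixture over a small dictionary, which is an assumption about the design rather than the paper's exact mixture-of-experts routing.

```python
import torch
import torch.nn as nn

class DictionaryINR(nn.Module):
    # Shared dictionary of small coordinate networks; each signal is represented
    # by a coefficient vector that mixes their outputs (illustrative sketch).
    def __init__(self, num_basis=16, hidden=64, out_dim=3):
        super().__init__()
        self.basis = nn.ModuleList(
            nn.Sequential(nn.Linear(2, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))
            for _ in range(num_basis)
        )

    def forward(self, coords, coeffs):        # coeffs: (num_basis,) for one signal
        outs = torch.stack([f(coords) for f in self.basis])   # (num_basis, N, 3)
        return torch.einsum("b,bnc->nc", coeffs, outs)

model = DictionaryINR()
coords = torch.rand(100, 2)
coeffs = torch.softmax(torch.randn(16), dim=0)                # per-signal code
pred = model(coords, coeffs)                                  # (100, 3)
```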

Meta-Learning Sparse Implicit Neural Representations

This work proposes to leverage a meta-learning approach in combination with network compression under a sparsity constraint, yielding a well-initialized sparse parameterization that quickly adapts to represent a set of unseen signals during subsequent training.

Implicit Neural Representations with Periodic Activation Functions

This work proposes to leverage periodic activation functions for implicit neural representations and demonstrates that these networks, dubbed sinusoidal representation networks or Sirens, are ideally suited for representing complex natural signals and their derivatives.
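
The sinusoidal activation is short to write down. The ω₀ = 30 frequency scaling and the uniform weight initialization below follow the commonly cited SIREN recipe; the layer sizes are placeholders.

```python
import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    # Linear layer followed by sin(omega_0 * x), with SIREN-style initialization.
    def __init__(self, in_dim, out_dim, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_dim, out_dim)
        with torch.no_grad():
            bound = 1 / in_dim if is_first else math.sqrt(6 / in_dim) / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

siren = nn.Sequential(
    SineLayer(2, 256, is_first=True),
    SineLayer(256, 256),
    nn.Linear(256, 3),                  # final linear layer outputs the signal value
)
out = siren(torch.rand(8, 2) * 2 - 1)   # (8, 3)
```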

Unified Implicit Neural Stylization

This work explores a new and intriguing direction: training a stylized implicit representation using a generalized approach that applies to various 2D and 3D scenarios, and demonstrates that the learned representation is continuous not only spatially but also style-wise, allowing effortless interpolation between different styles and the generation of images with new mixed styles.

Implicit Geometric Regularization for Learning Shapes

It is observed that a rather simple loss function, encouraging the neural network to vanish on the input point cloud and to have a unit norm gradient, possesses an implicit geometric regularization property that favors smooth and natural zero level set surfaces, avoiding bad zero-loss solutions.
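
That loss has two parts: the network should vanish on the observed point cloud, and its spatial gradient should have unit norm (the eikonal term). A minimal sketch, where the network f, the sampling of off-surface points, and the weighting lam are placeholder choices:

```python
import torch
import torch.nn as nn

# Signed-distance-style network over 3D points (placeholder architecture).
f = nn.Sequential(nn.Linear(3, 128), nn.Softplus(beta=100), nn.Linear(128, 1))

def igr_loss(points, lam=0.1):
    # Term 1: the implicit function should vanish on the input point cloud.
    surface_term = f(points).abs().mean()

    # Term 2 (eikonal): gradients w.r.t. space should have unit norm, here
    # evaluated at random points inside a [-1, 1]^3 bounding box.
    x = (torch.rand(1024, 3) * 2 - 1).requires_grad_(True)
    grad = torch.autograd.grad(f(x).sum(), x, create_graph=True)[0]
    eikonal_term = ((grad.norm(dim=-1) - 1) ** 2).mean()

    return surface_term + lam * eikonal_term

points = torch.rand(2048, 3) * 2 - 1      # placeholder point cloud
loss = igr_loss(points)
loss.backward()
```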

Convolutional Occupancy Networks

Convolutional Occupancy Networks is proposed, a more flexible implicit representation for detailed reconstruction of objects and 3D scenes that enables the fine-grained implicit 3D reconstruction of single objects, scales to large indoor scenes, and generalizes well from synthetic to real data.

Learned Initializations for Optimizing Coordinate-Based Neural Representations

This work proposes applying standard meta-learning algorithms to learn the initial weights of fully connected coordinate-based neural representations, based on the underlying class of signals being represented, enabling faster convergence during optimization and better generalization when only partial observations of a given signal are available.
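
One of the standard algorithms this refers to is a Reptile-style update: repeatedly fit a copy of the shared initialization to a sampled signal, then nudge the initialization toward the adapted weights. The step sizes, step counts, and the sample_signal task sampler below are placeholders.

```python
import copy
import torch
import torch.nn as nn

def reptile_step(meta_model, sample_signal, fit_steps=2, inner_lr=1e-2, outer_lr=1e-1):
    # Inner loop: adapt a copy of the shared initialization to one sampled signal.
    task_model = copy.deepcopy(meta_model)
    opt = torch.optim.SGD(task_model.parameters(), lr=inner_lr)
    for _ in range(fit_steps):
        coords, target = sample_signal()
        loss = ((task_model(coords) - target) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Outer loop: move the shared initialization toward the adapted weights.
    with torch.no_grad():
        for p, q in zip(meta_model.parameters(), task_model.parameters()):
            p += outer_lr * (q - p)

meta_model = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 3))
sample_signal = lambda: (torch.rand(256, 2), torch.rand(256, 3))  # placeholder tasks
for _ in range(10):
    reptile_step(meta_model, sample_signal)
```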

Implicit Neural Video Compression

The method, which is called implicit pixel flow (IPF), offers several simplifications over established neural video codecs: it does not require the receiver to have access to a pretrained neural network, does not use expensive interpolation-based warping operations, and does not require a separate training dataset.

NeRV: Neural Representations for Videos

A novel neural representation for videos (NeRV) that encodes videos in neural networks taking the frame index as input; it can be used as a proxy for video compression and achieves performance comparable to traditional frame-based video compression approaches.
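
A hedged sketch of the frame-index-to-frame mapping: the actual NeRV decoder uses a positional encoding of the index followed by convolutional upsampling blocks, whereas the tiny fully connected decoder and frame size below are purely illustrative.

```python
import torch
import torch.nn as nn

class TinyNeRV(nn.Module):
    # Maps a normalized frame index t in [0, 1] to an entire frame (toy version).
    def __init__(self, h=32, w=32, channels=3, freqs=8):
        super().__init__()
        self.freqs, self.h, self.w, self.c = freqs, h, w, channels
        self.decoder = nn.Sequential(
            nn.Linear(2 * freqs, 512), nn.GELU(),
            nn.Linear(512, h * w * channels), nn.Sigmoid(),
        )

    def forward(self, t):                              # t: (B,) frame indices
        k = 2.0 ** torch.arange(self.freqs, device=t.device)
        enc = torch.cat([torch.sin(t[:, None] * k),    # sinusoidal encoding of t
                         torch.cos(t[:, None] * k)], dim=-1)
        return self.decoder(enc).view(-1, self.c, self.h, self.w)

model = TinyNeRV()
frames = model(torch.linspace(0, 1, 5))                # (5, 3, 32, 32) decoded frames
```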

AutoInt: Automatic Integration for Fast Neural Volume Rendering

This work proposes automatic integration, a new framework for learning efficient, closed-form solutions to integrals using coordinate-based neural networks, and improves the tradeoff between rendering speed and image quality, accelerating render times by more than 10× at the cost of reduced image quality.
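
The core trick can be sketched as follows: parameterize an antiderivative network Φ, train its derivative dΦ/dt (obtained via autograd) to match the integrand, and then evaluate any definite integral as Φ(b) − Φ(a). The 1D toy integrand below stands in for the volume-rendering integrals targeted by the paper.

```python
import torch
import torch.nn as nn

# Antiderivative network Phi(t); its derivative is trained to match the integrand.
phi = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
integrand = lambda t: torch.exp(-t) * torch.sin(4 * t)   # toy stand-in

opt = torch.optim.Adam(phi.parameters(), lr=1e-3)
for _ in range(200):
    t = (torch.rand(256, 1) * 3).requires_grad_(True)    # sample the domain [0, 3]
    dphi_dt = torch.autograd.grad(phi(t).sum(), t, create_graph=True)[0]
    loss = ((dphi_dt - integrand(t)) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# A definite integral is now a single subtraction instead of a quadrature sum.
a, b = torch.zeros(1, 1), torch.full((1, 1), 2.0)
estimate = (phi(b) - phi(a)).item()
```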
...