Random Weight Factorization Improves the Training of Continuous Neural Representations

@article{Wang2022RandomWF,
  title={Random Weight Factorization Improves the Training of Continuous Neural Representations},
  author={Sifan Wang and Hanwen Wang and Jacob H. Seidman and Paris Perdikaris},
  journal={ArXiv},
  year={2022},
  volume={abs/2210.01274}
}
Continuous neural representations have recently emerged as a powerful and flexible alternative to classical discretized representations of signals. However, training them to capture fine details in multi-scale signals is difficult and computationally expensive. Here we propose random weight factorization as a simple drop-in replacement for parameterizing and initializing conventional linear layers in coordinate-based multi-layer perceptrons (MLPs) that significantly accelerates and improves their…
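
Since the abstract is truncated, a minimal JAX sketch of the general idea follows. It assumes the per-neuron factorization W = diag(s) · V, with the scale s drawn log-normally at initialization and both s and V trained; the function names and the default mu/sigma values are illustrative placeholders, not the authors' reference implementation.

import jax
import jax.numpy as jnp

def rwf_dense_init(key, in_dim, out_dim, mu=1.0, sigma=0.1):
    """Initialize a dense layer in factorized form W = diag(s) @ V (sketch)."""
    k_w, k_s = jax.random.split(key)
    # Conventional Glorot-style initialization of the full weight matrix.
    w = jax.random.normal(k_w, (out_dim, in_dim)) * jnp.sqrt(2.0 / (in_dim + out_dim))
    # Random per-neuron scale, sampled log-normally (mu/sigma are placeholders).
    s = jnp.exp(mu + sigma * jax.random.normal(k_s, (out_dim,)))
    # Factor the scale out of the initial weights so diag(s) @ V equals W at init.
    v = w / s[:, None]
    return {"s": s, "v": v, "b": jnp.zeros(out_dim)}

def rwf_dense_apply(params, x):
    """Forward pass with effective weight diag(s) @ V; both s and V are trained."""
    w_eff = params["s"][:, None] * params["v"]
    return x @ w_eff.T + params["b"]

key = jax.random.PRNGKey(0)
params = rwf_dense_init(key, in_dim=2, out_dim=128)
y = rwf_dense_apply(params, jnp.ones((16, 2)))  # shape (16, 128)

Training s and V jointly changes the effective per-neuron gradient dynamics relative to training W directly, which is the kind of effect this type of factorization targets.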

References

Showing 1-10 of 49 references

Implicit Neural Representations with Periodic Activation Functions

This work proposes to leverage periodic activation functions for implicit neural representations and demonstrates that these networks, dubbed sinusoidal representation networks or Sirens, are ideally suited for representing complex natural signals and their derivatives.
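
As a concrete illustration of such a layer, here is a minimal JAX sketch of a sine layer using the uniform initialization bounds described in the SIREN paper; omega0 = 30 is the value commonly used there and is treated as an assumption here.

import jax
import jax.numpy as jnp

def siren_layer_init(key, in_dim, out_dim, omega0=30.0, is_first=False):
    # Uniform initialization bounds from the SIREN scheme:
    # first layer: U(-1/n, 1/n); hidden layers: U(-sqrt(6/n)/omega0, sqrt(6/n)/omega0).
    bound = 1.0 / in_dim if is_first else (6.0 / in_dim) ** 0.5 / omega0
    k_w, k_b = jax.random.split(key)
    w = jax.random.uniform(k_w, (out_dim, in_dim), minval=-bound, maxval=bound)
    b = jax.random.uniform(k_b, (out_dim,), minval=-bound, maxval=bound)
    return {"w": w, "b": b}

def siren_layer_apply(params, x, omega0=30.0):
    # Periodic activation: sin(omega0 * (W x + b)).
    return jnp.sin(omega0 * (x @ params["w"].T + params["b"]))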

Neural tangent kernel: convergence and generalization in neural networks (invited paper)

This talk introduces the Neural Tangent Kernel formalism and presents a number of results that give insight into the dynamics of neural networks during training and into their generalization properties.
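
For reference, the kernel in question is the Gram matrix of the network's parameter gradients, and under gradient flow on a squared loss the network outputs evolve according to this kernel:

\Theta_t(x, x') = \nabla_\theta f(x;\theta_t)^\top \nabla_\theta f(x';\theta_t),
\qquad
\frac{d}{dt} f(x;\theta_t) = -\sum_{i=1}^{n} \Theta_t(x, x_i)\,\big(f(x_i;\theta_t) - y_i\big).

In the infinite-width limit the kernel stays asymptotically constant during training, which is what makes the training dynamics analytically tractable.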

Weight Normalization: A Simple Reparameterization to Accelerate Training of Deep Neural Networks

This work presents a reparameterization of the weight vectors in a neural network that decouples their length from their direction, improving the conditioning of the optimization problem and speeding up the convergence of stochastic gradient descent.
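
Concretely, the reparameterization writes each weight vector as a learned scalar magnitude times a direction,

\mathbf{w} = \frac{g}{\lVert \mathbf{v} \rVert}\,\mathbf{v},

with g a scalar and v an unconstrained vector. The random weight factorization proposed in the main paper similarly decouples a per-neuron scale from the remaining weight parameters, but draws that scale at random at initialization rather than normalizing v.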

Adam: A Method for Stochastic Optimization

This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
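
For reference, the standard Adam update for a gradient g_t, with decay rates beta_1 and beta_2, step size alpha, and stabilizer epsilon, is

m_t = \beta_1 m_{t-1} + (1-\beta_1)\, g_t, \qquad
v_t = \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2,

\hat{m}_t = \frac{m_t}{1-\beta_1^t}, \qquad
\hat{v}_t = \frac{v_t}{1-\beta_2^t}, \qquad
\theta_t = \theta_{t-1} - \alpha\,\frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon}.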

JAX: composable transformations of Python+NumPy programs, 2018

  • URL http://github.com/google/jax

Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains

This work suggests an approach for selecting problem-specific Fourier features that greatly improves the performance of MLPs on low-dimensional regression tasks relevant to the computer vision and graphics communities.
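
A minimal JAX sketch of the Gaussian random Fourier feature mapping follows; the bandwidth sigma is the problem-specific parameter the paper tunes, and the default value below is a placeholder.

import jax
import jax.numpy as jnp

def init_fourier_features(key, in_dim, num_features, sigma=10.0):
    # Frequency matrix B with entries drawn from N(0, sigma^2);
    # sigma controls the bandwidth of the encoding (placeholder default).
    return sigma * jax.random.normal(key, (num_features, in_dim))

def fourier_features(B, x):
    # gamma(x) = [cos(2*pi*B x), sin(2*pi*B x)], applied before the MLP.
    proj = 2.0 * jnp.pi * (x @ B.T)
    return jnp.concatenate([jnp.cos(proj), jnp.sin(proj)], axis=-1)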

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

This work describes how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrates results that outperform prior work on neural rendering and view synthesis.
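
The quantity being optimized renders each pixel with the standard volume rendering integral along a camera ray r(t) = o + t d, with density sigma and view-dependent color c:

C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t), \mathbf{d})\,dt,
\qquad
T(t) = \exp\!\Big(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\Big),

which NeRF approximates by sampling points along each ray and evaluating an MLP conditioned on position and viewing direction.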

Flax: A neural network library and ecosystem for JAX, 2020

  • URL http://github.com/google/flax

On the Spectral Bias of Neural Networks

This work shows that deep ReLU networks are biased towards low-frequency functions and studies the robustness of the learned frequency components with respect to parameter perturbation, developing the intuition that the parameters must be finely tuned to express high-frequency functions.