Generalised Implicit Neural Representations

Daniele Grattarola and Pierre Vandergheynst
We consider the problem of learning implicit neural representations (INRs) for signals on non-Euclidean domains. In the Euclidean case, INRs are trained on a discrete sampling of a signal over a regular lattice. Here, we assume that the continuous signal exists on some unknown topological space from which we sample a discrete graph. In the absence of a coordinate system to identify the sampled nodes, we propose approximating their location with a spectral embedding of the graph. This allows us… 
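The spectral embedding described above can be sketched in a few lines. This is a minimal illustration, assuming NumPy and a small dense graph: the low-frequency eigenvectors of the graph Laplacian serve as surrogate coordinates for the sampled nodes, which an INR can then take as input.

```python
import numpy as np

def spectral_embedding(adj, k):
    """Return a k-dimensional spectral embedding of a graph.

    adj: (n, n) symmetric adjacency matrix.
    k:   embedding dimension (number of non-trivial eigenvectors).
    """
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                      # combinatorial Laplacian L = D - A
    eigvals, eigvecs = np.linalg.eigh(lap)
    # Skip the constant eigenvector (eigenvalue ~0); keep the next k.
    return eigvecs[:, 1:k + 1]

# Toy example: a 4-cycle graph.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
coords = spectral_embedding(adj, 2)   # surrogate 2-D coordinates per node
```

In practice a sparse eigensolver would replace the dense `eigh` call for large graphs; the point is only that each node receives a coordinate vector without any ambient coordinate system.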

Seeing Implicit Neural Representations as Fourier Series

This work analyzes the connection between the two methods, showing that a Fourier-mapped perceptron is structurally equivalent to a one-hidden-layer SIREN, and identifies the relationship between the previously proposed Fourier mapping and the general d-dimensional Fourier series, leading to an integer lattice mapping.
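The structural equivalence can be verified numerically. A hedged sketch, assuming NumPy and a random frequency matrix: a Fourier feature layer `[sin(Bx), cos(Bx)]` is the same as one sine layer `sin(Wx + b)` because cos(z) = sin(z + π/2).

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(8, 2))          # illustrative random frequency matrix
x = rng.normal(size=(2,))

# Fourier-mapped features: [sin(Bx), cos(Bx)]
feats = np.concatenate([np.sin(B @ x), np.cos(B @ x)])

# Equivalent one-hidden-layer SIREN features: sin(Wx + b),
# stacking B twice and shifting half the phases by pi/2.
W = np.concatenate([B, B], axis=0)
b = np.concatenate([np.zeros(8), np.full(8, np.pi / 2)])
siren = np.sin(W @ x + b)
```

The two feature vectors agree exactly, so any linear readout on top of them defines the same function class.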

Geometric Deep Learning: Going beyond Euclidean data

Deep neural networks are used to solve a broad range of problems in computer vision, natural-language processing, and audio analysis, where the invariances of the underlying data structures are built into the networks used to model them.

Implicit Geometric Regularization for Learning Shapes

It is observed that a rather simple loss function, encouraging the neural network to vanish on the input point cloud and to have a unit norm gradient, possesses an implicit geometric regularization property that favors smooth and natural zero level set surfaces, avoiding bad zero-loss solutions.
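The loss described above combines a data term, which pushes the network to zero on the point cloud, with an eikonal term, which pushes its gradient toward unit norm. A minimal sketch, assuming NumPy and finite-difference gradients (the paper uses automatic differentiation; the function names here are illustrative):

```python
import numpy as np

def igr_loss(f, points, grad_points, eps=1e-4, lam=0.1):
    """Data term |f(x)| on the cloud + eikonal term (||grad f|| - 1)^2."""
    data = np.mean(np.abs([f(x) for x in points]))
    eik = 0.0
    for x in grad_points:
        # Central finite differences approximate grad f at x.
        g = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                      for e in np.eye(len(x))])
        eik += (np.linalg.norm(g) - 1.0) ** 2
    return data + lam * eik / len(grad_points)

# A perfect signed distance function of the unit circle scores ~0
# on points sampled from the circle itself.
f = lambda x: np.linalg.norm(x) - 1.0
theta = np.linspace(0, 2 * np.pi, 16, endpoint=False)
cloud = np.stack([np.cos(theta), np.sin(theta)], axis=1)
loss = igr_loss(f, cloud, cloud)
```

A network minimizing this loss is thus nudged toward a signed distance function, which is the implicit regularization the paper analyzes.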

Phase Transitions, Distance Functions, and Implicit Neural Representations

Inspiration is drawn from the theory of phase transitions of fluids, and a loss for training INRs is suggested that learns a density function converging to a proper occupancy function, while its log transform converges to a distance function.

Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains

An approach for selecting problem-specific Fourier features that greatly improves the performance of MLPs for low-dimensional regression tasks relevant to the computer vision and graphics communities is suggested.
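The mapping in question sends a low-dimensional input v to γ(v) = [cos(2πBv), sin(2πBv)] before the MLP. A sketch under the assumption B ~ N(0, σ²), with σ as the problem-specific knob the paper tunes:

```python
import numpy as np

def fourier_features(v, B):
    """gamma(v) = [cos(2*pi*B v), sin(2*pi*B v)]."""
    proj = 2 * np.pi * (B @ v)
    return np.concatenate([np.cos(proj), np.sin(proj)])

rng = np.random.default_rng(0)
sigma = 10.0                          # larger sigma -> higher frequencies
B = sigma * rng.normal(size=(64, 2))  # illustrative frequency matrix
z = fourier_features(np.array([0.3, 0.7]), B)   # 128-dim MLP input
```

Choosing σ trades underfitting of high-frequency detail (σ too small) against noisy artifacts (σ too large), which is why the selection is problem-specific.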

Transferability of Spectral Graph Convolutional Neural Networks

It is shown that if two graphs discretize the same continuous metric space, then a spectral filter/ConvNet has approximately the same effect on both graphs, which is more permissive than the standard analysis.

Sign and Basis Invariant Networks for Spectral Graph Representation Learning

SignNet and BasisNet are introduced: new neural architectures that are invariant to all requisite symmetries, and hence process collections of eigenspaces in a principled manner and can approximate any continuous function of eigenvectors with the proper invariances.
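The key symmetry here is that a Laplacian eigenvector v is only defined up to sign. One sign-invariant construction in the spirit of SignNet is f(v) = φ(v) + φ(-v); the sketch below uses a ReLU stand-in for φ (the paper's φ is a learned network, so everything here is illustrative):

```python
import numpy as np

def phi(v, W):
    # Stand-in for a small learned network; ReLU keeps it non-odd,
    # so phi(v) + phi(-v) is not trivially zero.
    return np.maximum(W @ v, 0.0)

def sign_invariant(v, W):
    """f(v) = phi(v) + phi(-v) is unchanged when v flips sign."""
    return phi(v, W) + phi(-v, W)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 6))
v = rng.normal(size=(6,))       # e.g. one Laplacian eigenvector
out_pos = sign_invariant(v, W)
out_neg = sign_invariant(-v, W)
```

Since eigensolvers return an arbitrary sign per eigenvector, feeding eigenvectors through such an f makes downstream predictions independent of that arbitrary choice.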

Laplacian Eigenmaps for Dimensionality Reduction and Data Representation

This work proposes a geometrically motivated algorithm for representing high-dimensional data that provides a computationally efficient approach to nonlinear dimensionality reduction, with locality-preserving properties and a natural connection to clustering.

Intrinsic Neural Fields: Learning Functions on Manifolds

Intrinsic neural fields can reconstruct high-fidelity textures from images with state-of-the-art quality and are robust to the discretization of the underlying manifold. The versatility of intrinsic neural fields is demonstrated on various applications: texture transfer between deformed shapes and between different shapes, texture reconstruction from real-world images with view dependence, and discretization-agnostic learning on meshes and point clouds.

Vector Neurons: A General Framework for SO(3)-Equivariant Networks

Invariance and equivariance to the rotation group have been widely discussed in the 3D deep learning community for point clouds. Yet most proposed methods either use complex mathematical tools that
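The core idea behind vector neurons can be sketched simply (this is an illustrative reduction, not the paper's full architecture): features are lists of 3-vectors, and linear layers mix channels only, so rotating the input commutes with the layer.

```python
import numpy as np

def vn_linear(V, W):
    """V: (C_in, 3) vector-valued features, W: (C_out, C_in) channel mixing.

    Because W acts on channels and the rotation acts on the 3-D axis,
    W @ (V @ R.T) == (W @ V) @ R.T, i.e. the layer is SO(3)-equivariant.
    """
    return W @ V

rng = np.random.default_rng(0)
V = rng.normal(size=(5, 3))
W = rng.normal(size=(8, 5))

# A rotation about the z-axis.
t = 0.7
R = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])

rotated_then_mapped = vn_linear(V @ R.T, W)
mapped_then_rotated = vn_linear(V, W) @ R.T
```

Equivariance of the linear layer follows from associativity of matrix multiplication; the nonlinearities in the actual framework are designed separately to preserve this property.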