A Structured Dictionary Perspective on Implicit Neural Representations
@article{Yuce2021ASD,
  title   = {A Structured Dictionary Perspective on Implicit Neural Representations},
  author  = {Gizem Y{\"u}ce and Guillermo Ortiz-Jim{\'e}nez and Beril Besbinar and Pascal Frossard},
  journal = {2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year    = {2022},
  pages   = {19206-19216}
}
Implicit neural representations (INRs) have recently emerged as a promising alternative to classical discretized representations of signals. Nevertheless, despite their practical success, we still do not understand how INRs represent signals. We propose a novel unified perspective to theoretically analyse INRs. Leveraging results from harmonic analysis and deep learning theory, we show that most INR families are analogous to structured signal dictionaries whose atoms are integer harmonics of…
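The dictionary claim above can be checked numerically. The following minimal numpy sketch (not the paper's code; the frequencies and the polynomial nonlinearity are illustrative assumptions) builds a toy network with a sinusoidal first layer and confirms via FFT that its output spectrum is supported only on integer combinations of the first-layer frequencies.

```python
# Minimal numpy sketch: a network with a sinusoidal first layer only produces
# integer harmonics of its first-layer frequencies, as the structured-dictionary
# view predicts. Frequencies and the nonlinearity are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 4096
x = np.arange(N) / N                       # uniform grid on [0, 1)

omegas = np.array([3.0, 7.0])              # first-layer frequencies (in cycles)
feats = np.sin(2 * np.pi * omegas[None, :] * x[:, None])   # (N, 2) sine features

# Toy second stage with a polynomial nonlinearity; polynomials of sinusoids
# expand into integer linear combinations of the input frequencies.
W = rng.standard_normal((2, 8))
h = feats @ W
out = (h ** 3 - 0.5 * h).sum(axis=1)       # scalar "INR" output on the grid

spectrum = np.abs(np.fft.rfft(out)) / N
support = np.nonzero(spectrum > 1e-6)[0]   # frequencies actually present
print(support)                             # every entry is |3a + 7b| for small integers a, b
```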
16 Citations
Deep Learning on Implicit Neural Representations of Shapes
- Computer Science, ArXiv
- 2023
It is verified that inr2vec can effectively embed the 3D shapes represented by the input INRs, and it is shown how the produced embeddings can be fed into deep learning pipelines to solve several tasks while processing INRs exclusively.
DINER: Disorder-Invariant Implicit Neural Representation
- Computer Science, ArXiv
- 2022
It is found that a frequency-related problem can be largely alleviated by re-arranging the coordinates of the input signal, for which the disorder-invariant implicit neural representation (DINER) is proposed, augmenting a traditional INR backbone with a hash-table.
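A hedged PyTorch sketch of the idea as summarized: a learnable table re-maps each discrete input coordinate before a standard MLP backbone. Sizes and the ReLU backbone are illustrative assumptions, not the authors' exact architecture.

```python
# Hedged sketch of a DINER-like INR: a learnable per-pixel coordinate table
# (the "hash-table") feeds a plain MLP backbone. All sizes are illustrative.
import torch
import torch.nn as nn

class DinerLikeINR(nn.Module):
    def __init__(self, num_pixels: int, coord_dim: int = 2, hidden: int = 64):
        super().__init__()
        # One learnable "virtual coordinate" per pixel index.
        self.table = nn.Parameter(torch.randn(num_pixels, coord_dim) * 0.01)
        self.backbone = nn.Sequential(
            nn.Linear(coord_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),              # e.g. RGB output
        )

    def forward(self, pixel_idx: torch.Tensor) -> torch.Tensor:
        # pixel_idx: (B,) long tensor of flattened pixel indices.
        return self.backbone(self.table[pixel_idx])

model = DinerLikeINR(num_pixels=64 * 64)
rgb = model(torch.arange(64 * 64))             # fit against the target image
```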
Frequency-Modulated Point Cloud Rendering with Easy Editing
- Computer Science, ArXiv
- 2023
An effective point cloud rendering pipeline for novel view synthesis is developed, enabling high-fidelity local detail reconstruction, real-time rendering, and user-friendly interactive editing based on point cloud manipulation.
SplineCam: Exact Visualization and Characterization of Deep Network Geometry and Decision Boundaries
- Computer Science, ArXiv
- 2023
This paper develops the first provably exact method for computing the geometry of a DN's mapping, including its decision boundary, over a specified region of the data space by leveraging the theory of Continuous Piece-Wise Linear spline DNs.
WIRE: Wavelet Implicit Neural Representations
- Computer Science, ArXiv
- 2023
Wavelet Implicit neural REpresentation (WIRE) uses a continuous complex Gabor wavelet activation function that is well-known to be optimally concentrated in space-frequency and to have excellent biases for representing images.
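A hedged PyTorch sketch of the complex Gabor wavelet activation in the spirit of WIRE, psi(z) = exp(i*omega0*z) * exp(-(s0*|z|)^2); the values of omega0 and s0 are illustrative, as the paper tunes them per task.

```python
# Hedged sketch of a complex Gabor wavelet activation (WIRE-style).
import torch
import torch.nn as nn

class GaborActivation(nn.Module):
    def __init__(self, omega0: float = 10.0, s0: float = 10.0):
        super().__init__()
        self.omega0, self.s0 = omega0, s0

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Oscillatory complex exponential times a real Gaussian envelope.
        return torch.exp(1j * self.omega0 * z - (self.s0 * torch.abs(z)) ** 2)

layer = nn.Linear(2, 64)                       # real first layer for simplicity
act = GaborActivation()
out = act(layer(torch.rand(128, 2)))           # complex-valued features
```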
Deformable Surface Reconstruction via Riemannian Metric Preservation
- Mathematics, ArXiv
- 2022
Estimating the pose of an object from a monocular image is an inverse problem fundamental in computer vision. The ill-posed nature of this problem requires incorporating deformation priors to solve…
StegaNeRF: Embedding Invisible Information within Neural Radiance Fields
- Computer Science, ArXiv
- 2022
StegaNeRF is an initial exploration into the novel problem of instilling customizable, imperceptible, and recoverable information into NeRF renderings, with minimal impact on the rendered images.
TITAN: Bringing The Deep Image Prior to Implicit Representations
- Computer Science
- 2022
This paper proposes to address and improve INRs' interpolation capabilities by explicitly integrating image prior information into the INR architecture via the deep decoder, a specific implementation of the deep image prior (DIP).
Continuous conditional video synthesis by neural processes
- Computer Science, ArXiv
- 2022
It is shown that conditional video synthesis can be formulated as a neural process, which maps input spatio-temporal coordinates to target pixel values given context spatio-temporal coordinates and pixel values, and is able to interpolate or predict at an arbitrarily high frame rate.
Sobolev Training for Implicit Neural Representations with Approximated Image Derivatives
- Computer Science, ECCV
- 2022
This paper proposes a training paradigm for INRs whose target output is image pixels: image derivatives are encoded in the network in addition to image values, with finite differences used to approximate the derivatives.
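A hedged PyTorch sketch of such an objective: supervise the INR on pixel values plus finite-difference approximations of the image derivatives. The weighting `lam` and the helper names are assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a Sobolev-style loss: value term plus a derivative term
# computed with forward finite differences (zero-padded at the borders).
import torch
import torch.nn.functional as F

def finite_diff(img: torch.Tensor) -> torch.Tensor:
    # img: (H, W) -> (2, H, W) stack of x- and y-differences.
    dx = F.pad(img[:, 1:] - img[:, :-1], (0, 1))
    dy = F.pad(img[1:, :] - img[:-1, :], (0, 0, 0, 1))
    return torch.stack([dx, dy])

def sobolev_loss(pred: torch.Tensor, target: torch.Tensor, lam: float = 0.1):
    value_term = F.mse_loss(pred, target)
    deriv_term = F.mse_loss(finite_diff(pred), finite_diff(target))
    return value_term + lam * deriv_term
```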
References
SHOWING 1-10 OF 58 REFERENCES
Ringing ReLUs: Harmonic Distortion Analysis of Nonlinear Feedforward Networks
- Computer Science, ICLR
- 2021
Harmonic distortion analysis is applied to understand the effect of nonlinearities in the spectral domain: they generate higher-frequency harmonics, whose magnitude increases with network depth, thereby increasing the “roughness” of the output landscape.
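The effect is easy to reproduce: passing a pure tone through a ReLU creates energy at higher integer harmonics of the input frequency, as this small numpy check shows (the tone frequency is an arbitrary choice).

```python
# Quick numpy check of ReLU harmonic distortion on a pure tone.
import numpy as np

N = 4096
t = np.arange(N) / N
tone = np.sin(2 * np.pi * 5 * t)           # pure 5-cycle sinusoid
clipped = np.maximum(tone, 0.0)            # ReLU

mags = np.abs(np.fft.rfft(clipped)) / N
peaks = np.nonzero(mags > 1e-6)[0]
print(peaks)                               # 0, 5, 10, 20, 30, ... (harmonics of 5)
```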
Multiplicative Filter Networks
- Computer Science, ICLR
- 2021
This paper proposes, and empirically demonstrates, that an arguably simpler class of function approximators can work just as well for low-dimensional-but-complex functions: multiplicative filter networks.
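A hedged PyTorch sketch of the Fourier variant of a multiplicative filter network: sinusoidal filters of the input are combined multiplicatively with linear maps of the hidden state, so no conventional activation is needed. Layer sizes are illustrative.

```python
# Hedged sketch of a Fourier multiplicative filter network (MFN):
# z_1 = g_1(x);  z_{k+1} = g_{k+1}(x) * (W_k z_k + b_k);  output = W z + b.
import torch
import torch.nn as nn

class FourierMFN(nn.Module):
    def __init__(self, in_dim=2, hidden=64, out_dim=3, layers=3):
        super().__init__()
        self.filters = nn.ModuleList(nn.Linear(in_dim, hidden) for _ in range(layers))
        self.linears = nn.ModuleList(nn.Linear(hidden, hidden) for _ in range(layers - 1))
        self.out = nn.Linear(hidden, out_dim)

    def forward(self, x):
        z = torch.sin(self.filters[0](x))                 # g_1(x)
        for lin, filt in zip(self.linears, self.filters[1:]):
            z = torch.sin(filt(x)) * lin(z)               # elementwise product
        return self.out(z)

mfn = FourierMFN()
rgb = mfn(torch.rand(1024, 2))
```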
Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains
- Computer Science, NeurIPS
- 2020
An approach is suggested for selecting problem-specific Fourier features that greatly improve the performance of MLPs on low-dimensional regression tasks relevant to the computer vision and graphics communities.
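The mapping itself is compact: a minimal numpy sketch of the Gaussian Fourier feature mapping gamma(v) = [cos(2*pi*B v), sin(2*pi*B v)], where B is sampled once from N(0, sigma^2) and kept fixed. The scale sigma and feature count are task-dependent assumptions.

```python
# Minimal sketch of Gaussian Fourier features for a 2D coordinate MLP.
import numpy as np

rng = np.random.default_rng(0)
num_feats, sigma = 256, 10.0
B = rng.normal(0.0, sigma, size=(2, num_feats))    # fixed random projection

def fourier_features(v: np.ndarray) -> np.ndarray:
    # v: (N, 2) coordinates in [0, 1]^2 -> (N, 2 * num_feats) features.
    proj = 2.0 * np.pi * v @ B
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

feats = fourier_features(rng.random((1024, 2)))    # feed these into an MLP
```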
Learned Initializations for Optimizing Coordinate-Based Neural Representations
- Computer Science, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2021
Standard meta-learning algorithms are applied to learn the initial weight parameters of fully-connected coordinate-based neural representations, based on the underlying class of signals being represented, enabling faster convergence during optimization and better generalization when only partial observations of a given signal are available.
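One such standard algorithm is Reptile; the following hedged PyTorch sketch learns an INR initialization with it. `make_inr` and `sample_signal` are placeholder assumptions for the network constructor and the per-signal data sampler.

```python
# Hedged Reptile-style outer loop for learning an INR initialization.
import copy
import torch

def reptile_init(make_inr, sample_signal, meta_steps=1000,
                 inner_steps=16, inner_lr=1e-2, outer_lr=1e-1):
    meta_model = make_inr()
    for _ in range(meta_steps):
        model = copy.deepcopy(meta_model)
        opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
        coords, values = sample_signal()           # one signal from the class
        for _ in range(inner_steps):               # short inner fit
            opt.zero_grad()
            loss = ((model(coords) - values) ** 2).mean()
            loss.backward()
            opt.step()
        # Move the meta-initialization toward the adapted weights.
        with torch.no_grad():
            for p_meta, p in zip(meta_model.parameters(), model.parameters()):
                p_meta += outer_lr * (p - p_meta)
    return meta_model
```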
Implicit Neural Representations with Periodic Activation Functions
- Computer Science, NeurIPS
- 2020
This work proposes to leverage periodic activation functions for implicit neural representations and demonstrates that these networks, dubbed sinusoidal representation networks or Sirens, are ideally suited for representing complex natural signals and their derivatives.
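A minimal PyTorch sketch of a Siren building block, assuming the sine activation and the uniform initialization scheme described in the paper; the layer sizes and omega0 = 30 follow commonly used defaults.

```python
# Hedged sketch of a Siren layer: linear map followed by sin(omega0 * .),
# with the first layer initialized more broadly than hidden layers.
import numpy as np
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    def __init__(self, in_dim, out_dim, omega0=30.0, is_first=False):
        super().__init__()
        self.omega0 = omega0
        self.linear = nn.Linear(in_dim, out_dim)
        with torch.no_grad():
            bound = 1 / in_dim if is_first else np.sqrt(6 / in_dim) / omega0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega0 * self.linear(x))

siren = nn.Sequential(SineLayer(2, 64, is_first=True),
                      SineLayer(64, 64), nn.Linear(64, 3))
rgb = siren(torch.rand(1024, 2) * 2 - 1)   # coordinates in [-1, 1]^2
```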
Neural tangent kernel: convergence and generalization in neural networks (invited paper)
- Computer Science, NeurIPS
- 2018
This talk introduces the Neural Tangent Kernel formalism, presents a number of results on it, and explains how they provide insight into the dynamics of neural networks during training and into their generalization properties.
Adam: A Method for Stochastic Optimization
- Computer Science, ICLR
- 2015
This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
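A single Adam step is compact enough to write out; this numpy sketch follows the published update rule with its default hyperparameters, where `t` is the 1-based step count.

```python
# One Adam step: bias-corrected moment estimates drive a per-parameter update.
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad              # first-moment estimate
    v = b2 * v + (1 - b2) * grad ** 2         # second-moment estimate
    m_hat = m / (1 - b1 ** t)                 # bias correction
    v_hat = v / (1 - b2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```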
Learning Continuous Representation of Audio for Arbitrary Scale Super Resolution
- Computer Science, 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
- 2022
A method of implicit neural representation is proposed, coined Local Implicit representation for Super resolution of Arbitrary scale (LISA), which locally parameterizes a chunk of audio as a function of continuous time and represents each chunk with the local latent codes of neighboring chunks, so that the function can extrapolate the signal at any time coordinate.
Advances in Neural Rendering
- Computer Science, SIGGRAPH Courses
- 2021
This state‐of‐the‐art report on advances in neural rendering focuses on methods that combine classical rendering principles with learned 3D scene representations, often now referred to as neural scene representations.
Depth-supervised NeRF: Fewer Views and Faster Training for Free
- Computer Science, 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2022
This work formalizes depth supervision through DS-NeRF (Depth-supervised Neural Radiance Fields), a loss for learning radiance fields that takes advantage of readily available depth supervision and yields better renderings from fewer training views while training 2-3x faster.
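A simplified sketch of how depth supervision can enter a radiance-field objective: penalize the ray's expected termination depth against an observed depth (e.g. from SfM keypoints). This plain L2 penalty is a stand-in for illustration, not necessarily the paper's exact loss.

```python
# Hedged sketch of a depth-supervision term for volume rendering.
import torch

def expected_depth(weights: torch.Tensor, z_vals: torch.Tensor) -> torch.Tensor:
    # weights: (R, S) volume-rendering weights; z_vals: (R, S) sample depths.
    return (weights * z_vals).sum(dim=-1)

def depth_loss(weights, z_vals, observed_depth, lam: float = 0.1):
    # Added to the usual photometric loss for rays with known depth.
    return lam * ((expected_depth(weights, z_vals) - observed_depth) ** 2).mean()
```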