Generative Models as Distributions of Functions
@inproceedings{Dupont2021GenerativeMA, title={Generative Models as Distributions of Functions}, author={Emilien Dupont and Yee Whye Teh and A. Doucet}, booktitle={International Conference on Artificial Intelligence and Statistics}, year={2021} }
Generative models are typically trained on grid-like data such as images. As a result, the size of these models usually scales directly with the underlying grid resolution. In this paper, we abandon discretized grids and instead parameterize individual data points by continuous functions. We then build generative models by learning distributions over such functions. By treating data points as functions, we can abstract away from the specific type of data we train on and construct models that…
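To make the core idea concrete: a single data point, such as an image, can be represented as a continuous function from coordinates to feature values, parameterized by a coordinate-based MLP that can be queried at any resolution. The sketch below is a minimal PyTorch illustration of this "data as functions" view, not the paper's actual model (which additionally learns a distribution over such functions); all class and function names are hypothetical.

```python
import torch
import torch.nn as nn

class ImageFunction(nn.Module):
    """One data point represented as a continuous function f: (x, y) -> RGB."""
    def __init__(self, hidden=128, layers=3):
        super().__init__()
        dims = [2] + [hidden] * layers + [3]
        blocks = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            blocks += [nn.Linear(d_in, d_out), nn.ReLU()]
        self.net = nn.Sequential(*blocks[:-1])  # drop the final ReLU on the RGB output

    def forward(self, coords):           # coords: (N, 2), typically in [-1, 1]
        return self.net(coords)          # (N, 3) RGB values

def sample_grid(f, resolution):
    """Query the function on an arbitrary grid: the sampling resolution is
    decoupled from the model, unlike pixel-array generative models."""
    xs = torch.linspace(-1.0, 1.0, resolution)
    grid = torch.stack(torch.meshgrid(xs, xs, indexing="ij"), dim=-1)  # (R, R, 2)
    return f(grid.reshape(-1, 2)).reshape(resolution, resolution, 3)

f = ImageFunction()
print(sample_grid(f, 64).shape)    # torch.Size([64, 64, 3])
print(sample_grid(f, 256).shape)   # same weights, finer grid: torch.Size([256, 256, 3])
```

Because the representation is a function rather than a pixel array, the same weights can be evaluated at 64×64 or 256×256 without retraining, which is what lets models built this way abstract away from grid resolution.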
40 Citations
FunkNN: Neural Interpolation for Functional Generation
- Mathematics, Computer Science, ArXiv
- 2022
FunkNN is a new convolutional network that learns to reconstruct continuous images at arbitrary coordinates; it can be applied to any image dataset and becomes a functional generator that can act as a prior in continuous ill-posed inverse problems.
Convolutional Neural Processes for Inpainting Satellite Images
- Mathematics, Environmental Science, ArXiv
- 2022
This work casts satellite image inpainting as a natural meta-learning problem and proposes using convolutional neural processes (ConvNPs), where each satellite image is framed as its own task or 2D regression problem; ConvNPs are shown to outperform classical methods and state-of-the-art deep learning inpainting models on a scanline inpainting problem.
Generative Adversarial Neural Operators
- Computer Science, ArXiv
- 2022
In this work, GANO is instantiated using the Wasserstein criterion, and it is shown how the Wasserstein loss can be computed in infinite-dimensional spaces.
Learning Signal-Agnostic Manifolds of Neural Fields
- Computer Science, NeurIPS
- 2021
This model, dubbed GEM, learns to capture the underlying structure of datasets in a modality-independent manner across image, shape, audio, and cross-modal audiovisual domains, and shows that by walking across the underlying manifold of GEM, the model can generate new samples in these signal domains.
From data to functa: Your data point is a function and you should treat it like one
- Computer Science, ArXiv
- 2022
This paper refers to the data as functa, and proposes a framework for deep learning on functa, which has various compelling properties across data modalities, in particular on the canonical tasks of generative modeling, data imputation, novel view synthesis and classification.
From data to functa: Your data point is a function and you can treat it like one
- Computer Science, ICML
- 2022
It is demonstrated that the proposed framework for deep learning on functa has various compelling properties across data modalities, in particular on the canonical tasks of generative modeling, data imputation, novel view synthesis and classification.
COIN: COmpression with Implicit Neural representations (Neural Compression Workshop)
- Computer Science
- 2021
A new simple approach for image compression: instead of storing the RGB values for each pixel of an image, the weights of a neural network overfitted to the image are stored, and this approach outperforms JPEG at low bit-rates, even without entropy coding or learning a distribution over weights.
COIN: COmpression with Implicit Neural representations
- Computer Science, ICLR 2021
- 2021
A new simple approach for image compression: instead of storing the RGB values for each pixel of an image, the weights of a neural network overfitted to the image are stored, and this approach outperforms JPEG at low bit-rates, even without entropy coding or learning a distribution over weights.
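The mechanism summarized in the two COIN entries above can be sketched in a few lines: overfit a small coordinate network to one image and keep its weights as the compressed code, then decode by evaluating the network on a pixel grid. The snippet below is a rough, hypothetical illustration (a plain ReLU MLP, no quantization or entropy coding), not the paper's reference implementation.

```python
import torch
import torch.nn as nn

def fit_coin(image, steps=1000, hidden=64, lr=2e-4):
    """Overfit a small coordinate MLP to a single image; the weights are the code.

    image: (H, W, 3) tensor with values in [0, 1].
    """
    H, W, _ = image.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij"
    )
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)   # (H*W, 2)
    targets = image.reshape(-1, 3)                          # (H*W, 3)

    net = nn.Sequential(
        nn.Linear(2, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, 3), nn.Sigmoid(),
    )
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(coords) - targets) ** 2).mean()  # per-pixel reconstruction error
        loss.backward()
        opt.step()

    n_params = sum(p.numel() for p in net.parameters())
    print(f"stored parameters: {n_params} vs raw pixel values: {H * W * 3}")
    return net  # decoding = net(coords).reshape(H, W, 3)
```

The compression gain in practice also depends on storing the weights at reduced precision, a step the sketch leaves out.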
DeepTIMe: Deep Time-Index Meta-Learning for Non-Stationary Time-Series Forecasting
- Computer Science, ArXiv
- 2022
This paper proposes DeepTIMe, a deep time-index-based model trained via a meta-learning formulation that overcomes the limitations of existing approaches, yielding an efficient and accurate forecasting model that achieves results competitive with state-of-the-art methods.
COIN++: Data Agnostic Neural Compression
- Computer Science, ArXiv
- 2022
This paper proposes COIN++, a neural compression framework that seamlessly handles a wide range of data modalities by encoding each data item as modulations applied to a meta-learned base network; quantizing and entropy coding these modulations leads to large compression gains while reducing encoding time by two orders of magnitude compared to baselines.
References
SHOWING 1-10 OF 78 REFERENCES
SAL: Sign Agnostic Learning of Shapes From Raw Data
- Computer Science, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2020
This paper introduces Sign Agnostic Learning (SAL), a deep learning approach for learning implicit shape representations directly from raw, unsigned geometric data such as point clouds and triangle soups, which the authors believe opens the door to many geometric deep learning applications with real-world data.
PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
- Computer Science, 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
- 2017
This paper designs a novel type of neural network that directly consumes point clouds, which well respects the permutation invariance of points in the input and provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing.
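The permutation invariance highlighted here comes from processing every point with a shared per-point network and then aggregating with a symmetric operation such as max pooling. The sketch below illustrates only that property; the full PointNet also includes learned input and feature alignment transforms, and all names here are hypothetical.

```python
import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    """Shared per-point MLP followed by a symmetric max-pool: the output does not
    depend on the ordering of the input points."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.per_point = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU(),
        )
        self.classifier = nn.Linear(256, num_classes)

    def forward(self, points):                 # points: (B, N, 3), any point order
        feats = self.per_point(points)         # (B, N, 256), same weights for every point
        global_feat = feats.max(dim=1).values  # symmetric aggregation over the set
        return self.classifier(global_feat)

net = TinyPointNet()
pts = torch.randn(2, 1024, 3)
perm = torch.randperm(1024)
# Shuffling the points leaves the prediction unchanged.
print(torch.allclose(net(pts), net(pts[:, perm])))  # True
```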
Implicit Neural Representations with Periodic Activation Functions
- Computer Science, NeurIPS
- 2020
This work proposes to leverage periodic activation functions for implicit neural representations and demonstrates that these networks, dubbed sinusoidal representation networks or Sirens, are ideally suited for representing complex natural signals and their derivatives.
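The periodic activations referred to here amount to layers of the form sin(omega_0 * (W x + b)), combined with a specific weight initialization. The sketch below is a hypothetical rendering of one such layer; the initialization constants follow the commonly cited scheme (first layer U(-1/n, 1/n), later layers U(-sqrt(6/n)/omega_0, sqrt(6/n)/omega_0)) and may differ in detail from the paper's reference code.

```python
import math
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """One Siren-style layer: x -> sin(omega_0 * (W @ x + b))."""
    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            # Scaled uniform init keeps activations well-distributed through depth.
            bound = 1.0 / in_features if is_first else math.sqrt(6.0 / in_features) / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

# A small Siren mapping 2D coordinates to a scalar signal value.
siren = nn.Sequential(
    SineLayer(2, 128, is_first=True),
    SineLayer(128, 128),
    nn.Linear(128, 1),
)
print(siren(torch.rand(16, 2)).shape)  # torch.Size([16, 1])
```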
Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations
- Computer Science, NeurIPS
- 2019
The proposed Scene Representation Networks (SRNs), a continuous, 3D-structure-aware scene representation that encodes both geometry and appearance, are demonstrated by evaluating them for novel view synthesis, few-shot reconstruction, joint shape and appearance interpolation, and unsupervised discovery of a non-rigid face model.
DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation
- Computer Science, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2019
This work introduces DeepSDF, a learned continuous Signed Distance Function (SDF) representation of a class of shapes that enables high quality shape representation, interpolation and completion from partial and noisy 3D input data.
PointConv: Deep Convolutional Networks on 3D Point Clouds
- Computer Science, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2019
The dynamic filter is extended to a new convolution operation, named PointConv, which can be applied on point clouds to build deep convolutional networks and is able to achieve state-of-the-art on challenging semantic segmentation benchmarks on 3D point clouds.
Which Training Methods for GANs do actually Converge?
- Computer Science, ICML
- 2018
This paper describes a simple yet prototypical counterexample showing that, in the more realistic case of distributions that are not absolutely continuous, unregularized GAN training is not always convergent; it also extends convergence results to more general GANs and proves local convergence for simplified gradient penalties even if the generator and data distributions lie on lower-dimensional manifolds.
Progressive Growing of GANs for Improved Quality, Stability, and Variation
- Computer Science, ICLR
- 2018
A new training methodology for generative adversarial networks is described, starting from a low resolution, and adding new layers that model increasingly fine details as training progresses, allowing for images of unprecedented quality.
Deep Learning on Point Sets for 3D Classification and Segmentation
- Computer Science
- 2016
This paper designs a novel type of neural network that directly consumes point clouds and well respects the permutation invariance of points in the input, and provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing.
PointNet++: Deep Hierarchical Feature Learning on Point Sets in a Metric Space
- Computer Science, NIPS
- 2017
A hierarchical neural network that applies PointNet recursively on a nested partitioning of the input point set and proposes novel set learning layers to adaptively combine features from multiple scales to learn deep point set features efficiently and robustly.