Corpus ID: 240354020

FC2T2: The Fast Continuous Convolutional Taylor Transform with Applications in Vision and Graphics

@article{Lange2021FC2T2TF,
  title={FC2T2: The Fast Continuous Convolutional Taylor Transform with Applications in Vision and Graphics},
  author={Henning Lange and J. Nathan Kutz},
  journal={ArXiv},
  year={2021},
  volume={abs/2111.00110}
}
Series expansions have been a cornerstone of applied mathematics and engineering for centuries. In this paper, we revisit the Taylor series expansion from a modern Machine Learning perspective. Specifically, we introduce the Fast Continuous Convolutional Taylor Transform (FC2T2), a variant of the Fast Multipole Method (FMM), that allows for the efficient approximation of low dimensional convolutional operators in continuous space. We build upon the FMM, which is an approximate algorithm that… 
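As a minimal illustration of the underlying idea (a toy sketch, not the paper's FC2T2 implementation), the snippet below approximates a 1D Gaussian convolution sum by Taylor-expanding the kernel about a source-cluster center: the per-source work collapses into a small set of precomputed moments that are independent of the evaluation point, which is the factorization FMM-style methods exploit. All function names are illustrative.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def direct_sum(x, sources, weights):
    # f(x) = sum_j w_j * exp(-(x - y_j)^2 / 2), evaluated exactly
    return sum(w * np.exp(-0.5 * (x - y) ** 2)
               for y, w in zip(sources, weights))

def taylor_sum(x, sources, weights, order=10):
    # Expand the kernel about the cluster center c:
    #   K(x - y) = sum_k K^(k)(x - c) * (-(y - c))^k / k!
    # The moments m_k depend only on the sources, not on x.
    c = np.mean(sources)
    u, v = x - c, np.asarray(sources) - c
    total = 0.0
    for k in range(order + 1):
        m_k = np.sum(np.asarray(weights) * (-v) ** k) / math.factorial(k)
        # k-th derivative of exp(-u^2/2) via probabilists' Hermite polys:
        #   K^(k)(u) = (-1)^k * He_k(u) * exp(-u^2/2)
        he_k = hermeval(u, np.eye(order + 1)[k])
        total += m_k * (-1) ** k * he_k * np.exp(-0.5 * u ** 2)
    return total
```

For sources clustered within a fraction of the kernel width, a low truncation order already matches the direct sum to high precision, which is what makes FMM-style schemes fast.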

Advances in Neural Rendering

This state-of-the-art report on advances in neural rendering focuses on methods that combine classical rendering principles with learned 3D scene representations, often now referred to as neural scene representations.

Next3D: Generative Neural Texture Rasterization for 3D-Aware Head Avatars

This work proposes a novel 3D GAN framework for unsupervised learning of generative, high-quality and 3D-consistent facial avatars from unstructured 2D images, together with a 3D representation called Generative Texture-Rasterized Tri-planes that achieves both deformation accuracy and topological flexibility.

References

Showing 1-10 of 72 references

OptNet: Differentiable Optimization as a Layer in Neural Networks

OptNet is presented, a network architecture that integrates optimization problems (here, specifically in the form of quadratic programs) as individual layers in larger end-to-end trainable deep networks, and shows how techniques from sensitivity analysis, bilevel optimization, and implicit differentiation can be used to exactly differentiate through these layers.
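The implicit-differentiation trick the snippet refers to can be shown in one line for a scalar quadratic objective (a toy stand-in, not OptNet's batched QP solver): differentiate the stationarity condition of the argmin instead of unrolling an iterative solver.

```python
# Toy illustration of implicit differentiation (not OptNet itself):
# x*(a, b) = argmin_x 0.5*a*x^2 - b*x  with a > 0 has stationarity
# condition g(x*, a, b) = a*x* - b = 0, hence x* = b/a in closed form.

def argmin_quadratic(a, b):
    # Closed-form minimizer of 0.5*a*x^2 - b*x for a > 0
    return b / a

def dxstar_db(a, b):
    # Implicit function theorem on g = 0:
    #   dx*/db = -(dg/dx)^(-1) * dg/db = 1/a
    # No solver iterations need to be differentiated through.
    return 1.0 / a
```

A finite-difference check on `argmin_quadratic` confirms the implicit gradient, which is the same consistency OptNet establishes for general quadratic programs.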

Dual-Tree Fast Gauss Transforms

The extent to which the dual-tree recursion with finite-difference approximation can be integrated with multipole-like Hermite expansions in order to achieve reasonable efficiency across all bandwidth scales is explored, though only for low dimensionalities.

Optimized M2L Kernels for the Chebyshev Interpolation based Fast Multipole Method

Several optimizations are presented for the multipole-to-local (M2L) operator, known to be the costliest FMM operator, reducing the precomputation time and speeding up the matrix-vector product.

Fourier Neural Operator for Parametric Partial Differential Equations

This work formulates a new neural operator by parameterizing the integral kernel directly in Fourier space, allowing for an expressive and efficient architecture, and shows state-of-the-art performance compared to existing neural network methodologies.
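The core operation can be sketched in a few lines of NumPy (a simplified, single-channel stand-in for the paper's learned multi-channel layer, with illustrative names): transform to Fourier space, scale a truncated set of low modes by learnable complex weights, and transform back.

```python
import numpy as np

def spectral_conv1d(u, weights):
    # u: real signal of length n
    # weights: complex weights for the lowest len(weights) Fourier modes
    # (these are the learned parameters in an FNO-style layer);
    # all higher modes are truncated to zero.
    U = np.fft.rfft(u)
    out = np.zeros_like(U)
    m = min(len(weights), len(U))
    out[:m] = U[:m] * weights[:m]
    return np.fft.irfft(out, n=len(u))
```

Setting identity weights over all retained modes recovers the input exactly, a quick sanity check that the forward/inverse transform pair is consistent.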

Occupancy Networks: Learning 3D Reconstruction in Function Space

This paper proposes Occupancy Networks, a new representation for learning-based 3D reconstruction methods that encodes a description of the 3D output at infinite resolution without excessive memory footprint, and validates that the representation can efficiently encode 3D structure and can be inferred from various kinds of input.

DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation

This work introduces DeepSDF, a learned continuous Signed Distance Function (SDF) representation of a class of shapes that enables high quality shape representation, interpolation and completion from partial and noisy 3D input data.
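For intuition, the quantity such a network regresses is easy to write down analytically for a simple shape: the MLP f(z, p) is trained so that its output approximates SDF(p) for the shape encoded by latent z. A sphere's signed distance function (illustrative, not from the paper):

```python
import numpy as np

def sphere_sdf(points, radius=1.0):
    # Signed distance to a sphere centered at the origin:
    # negative inside, zero on the surface, positive outside.
    # This is the ground-truth target a DeepSDF-style MLP regresses.
    return np.linalg.norm(points, axis=-1) - radius
```

The surface is the zero level set of this function, which is why interpolation and shape completion behave smoothly in this representation.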

NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis

This work describes how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrates results that outperform prior work on neural rendering and view synthesis.
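The rendering step the snippet describes reduces to alpha compositing along each camera ray; below is a minimal single-ray NumPy sketch of the volume-rendering quadrature NeRF uses (variable names illustrative).

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    # sigmas: (n,) volume densities at samples along a ray
    # colors: (n, 3) emitted RGB at those samples
    # deltas: (n,) distances between adjacent samples
    alphas = 1.0 - np.exp(-sigmas * deltas)  # per-sample opacity
    # transmittance: probability the ray reaches sample i unoccluded
    trans = np.concatenate(([1.0], np.cumprod(1.0 - alphas)[:-1]))
    weights = trans * alphas
    return weights @ colors  # composited ray color
```

An opaque first sample fully occludes everything behind it, so the composited color collapses to that sample's color, matching the intended front-to-back behavior.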

Speeding up Convolutional Neural Networks with Low Rank Expansions

Two simple schemes for drastically speeding up convolutional neural networks are presented, achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain.
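The rank-1 case rests on a simple identity: a 2D filter that is the outer product of two 1D filters can be applied as two 1D passes, cutting the per-pixel cost from kh*kw to kh+kw multiplies. A NumPy sketch (cross-correlation over the 'valid' region; function names illustrative):

```python
import numpy as np

def corr2d_valid(img, kernel):
    # Direct 2D cross-correlation over the valid region (reference).
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def corr2d_rank1(img, col, row):
    # Same result when kernel = outer(col, row): filter each row with
    # `row`, then each column of the intermediate with `col`.
    tmp = np.array([np.correlate(r, row, mode='valid') for r in img])
    return np.array([np.correlate(c, col, mode='valid') for c in tmp.T]).T
```

Real CNN filters are rarely exactly rank-1, which is why the paper fits a low-rank basis of such filters rather than assuming separability outright.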

PointProNets: Consolidation of Point Clouds with Convolutional Neural Networks

This work proposes a generative neural network architecture that can input and output point clouds, unlocking a powerful set of tools from the deep learning literature and uses this architecture to apply convolutional neural networks to local patches of geometry for high quality and efficient point cloud consolidation.

Soft Rasterizer: A Differentiable Renderer for Image-Based 3D Reasoning

This work proposes a truly differentiable rendering framework that directly renders colorized meshes using differentiable functions and back-propagates efficient supervision signals to mesh vertices and their attributes from various forms of image representations, including silhouette, shading and color images.
...