Unified Implicit Neural Stylization

Zhiwen Fan, Yi-fan Jiang, Peihao Wang, Xinyu Gong, Dejia Xu, Zhangyang Wang
Abstract. Representing visual signals by implicit representations (e.g., a coordinate-based deep network) has prevailed in many vision tasks. This work explores a new and intriguing direction: training a stylized implicit representation, using a generalized approach that applies to various 2D and 3D scenarios. We conduct a pilot study on a variety of implicit functions, including 2D coordinate-based representations, neural radiance fields, and signed distance functions. Our solution is a…

References


Stylizing 3D Scene via Implicit Representation and HyperNetwork
This work proposes a joint framework to directly render novel views with the desired style with a two-stage training procedure and a patch sub-sampling approach to optimize the style and content losses with the neural radiance fields model.
Implicit Neural Representations with Periodic Activation Functions
This work proposes to leverage periodic activation functions for implicit neural representations and demonstrates that these networks, dubbed sinusoidal representation networks or Sirens, are ideally suited for representing complex natural signals and their derivatives.
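The periodic activations described above can be sketched in a few lines. This is an illustrative NumPy sketch, not the SIREN authors' implementation; the layer/variable names are my own, and `omega` stands in for the frequency-scaling factor the paper uses (it reports 30 as a default). The first-layer weights follow the uniform range the paper suggests for the input layer.

```python
import numpy as np

def siren_layer(x, W, b, omega=30.0):
    """One SIREN-style layer: a linear map followed by a scaled sine
    activation, so the network represents signals with smooth derivatives."""
    return np.sin(omega * (x @ W + b))

rng = np.random.default_rng(0)
d_in, d_hidden = 2, 64
# First-layer init range from the SIREN paper: U(-1/d_in, 1/d_in)
W1 = rng.uniform(-1 / d_in, 1 / d_in, size=(d_in, d_hidden))
b1 = np.zeros(d_hidden)

coords = rng.uniform(-1, 1, size=(16, d_in))  # 16 query coordinates in [-1, 1]^2
h = siren_layer(coords, W1, b1)
print(h.shape)  # (16, 64); every activation lies in [-1, 1]
```

Stacking such layers (with the paper's scaled init for hidden layers) yields a coordinate network whose output varies smoothly with the input position.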
NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis
This work describes how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrates results that outperform prior work on neural rendering and view synthesis.
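A key ingredient NeRF uses before the MLP is a sinusoidal positional encoding of the input coordinates. The sketch below is a minimal NumPy rendition of that encoding; the function name and frequency count are illustrative, not taken from any particular codebase.

```python
import numpy as np

def positional_encoding(p, num_freqs=10):
    """NeRF-style positional encoding: map each coordinate to sin/cos
    features at exponentially spaced frequencies 2^k * pi, k = 0..L-1."""
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi   # (L,)
    angles = p[..., None] * freqs                   # (..., D, L)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*p.shape[:-1], -1)           # (..., D * 2L)

x = np.array([[0.5, -0.2, 0.1]])      # one 3D point
print(positional_encoding(x).shape)   # (1, 60): 3 coords x 10 freqs x {sin, cos}
```

The high-frequency features let a plain MLP fit fine geometric and appearance detail that raw coordinates alone cannot capture.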
Very Deep Convolutional Networks for Large-Scale Image Recognition
This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation
This work introduces DeepSDF, a learned continuous Signed Distance Function (SDF) representation of a class of shapes that enables high quality shape representation, interpolation and completion from partial and noisy 3D input data.
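To make the SDF convention concrete, here is the analytic signed distance of a sphere, the simplest instance of what DeepSDF learns with a network: negative inside the surface, zero on it, positive outside. This is a toy illustration, not DeepSDF's learned model.

```python
import numpy as np

def sphere_sdf(points, center, radius):
    """Signed distance to a sphere: ||p - c|| - r.
    Negative inside, zero on the surface, positive outside."""
    return np.linalg.norm(points - center, axis=-1) - radius

pts = np.array([[0.0, 0.0, 0.0],   # center (inside)
                [1.0, 0.0, 0.0],   # on the surface
                [2.0, 0.0, 0.0]])  # outside
d = sphere_sdf(pts, center=np.zeros(3), radius=1.0)
print(d)  # distances: -1 (inside), 0 (surface), 1 (outside)
```

DeepSDF replaces this closed-form function with a latent-conditioned MLP, so one network represents a whole class of shapes and the zero level set gives the surface.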
Arbitrary Style Transfer in Real-Time with Adaptive Instance Normalization
This paper presents a simple yet effective approach that for the first time enables arbitrary style transfer in real-time, comparable to the fastest existing approach, without the restriction to a pre-defined set of styles.
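The AdaIN operation at the core of that approach is compact enough to write out. The NumPy sketch below assumes `(C, H, W)` feature maps and is an illustration of the published formula, not the authors' code; `eps` is a small constant I add for numerical safety.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Adaptive Instance Normalization: align the per-channel mean and
    std of the content features to those of the style features."""
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    return s_std * (content - c_mean) / (c_std + eps) + s_mean

rng = np.random.default_rng(1)
c = rng.normal(size=(3, 8, 8))                       # content features
s = rng.normal(loc=2.0, scale=0.5, size=(3, 8, 8))   # style features
out = adain(c, s)  # out now carries the style's channel statistics
```

Because any style image can supply the target statistics at test time, a single feed-forward network handles arbitrary styles in real time.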
Perceptual Losses for Real-Time Style Transfer and Super-Resolution
This work considers image transformation problems and proposes perceptual loss functions for training feed-forward networks on such tasks, showing results on image style transfer, where a feed-forward network is trained to solve, in real time, the optimization problem proposed by Gatys et al.
A Neural Algorithm of Artistic Style
This work introduces an artificial system based on a Deep Neural Network that creates artistic images of high perceptual quality and offers a path forward to an algorithmic understanding of how humans create and perceive artistic imagery.
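The style statistic at the heart of that algorithm is the Gram matrix of deep feature maps, compared between the stylized output and the style image. The sketch below is a minimal NumPy version under the assumption of `(C, H, W)` features; in the actual method these features come from a pretrained VGG network.

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (C, H, W) feature map: channel-wise inner
    products that summarize texture/style statistics, normalized by size."""
    C, H, W = features.shape
    F = features.reshape(C, H * W)
    return F @ F.T / (C * H * W)

def style_loss(feat_a, feat_b):
    """Squared Frobenius distance between the two Gram matrices."""
    return float(np.sum((gram_matrix(feat_a) - gram_matrix(feat_b)) ** 2))

rng = np.random.default_rng(2)
f = rng.normal(size=(4, 6, 6))
g = rng.normal(size=(4, 6, 6))
print(style_loss(f, f))  # 0.0 for identical features
```

Minimizing this loss across several feature layers, together with a content loss on deeper features, produces the stylized image in the original optimization-based formulation.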
Implicit Neural Video Compression
The method, called implicit pixel flow (IPF), offers several simplifications over established neural video codecs: it does not require the receiver to have access to a pretrained neural network, does not use expensive interpolation-based warping operations, and does not require a separate training dataset.
Efficient Geometry-aware 3D Generative Adversarial Networks
This work introduces an expressive hybrid explicit-implicit network architecture that synthesizes not only high-resolution multi-view-consistent images in real time but also produces high-quality 3D geometry.