A Generative Model for Volume Rendering

@article{Berger2019AGM,
  title={A Generative Model for Volume Rendering},
  author={Matthew Berger and Jixian Li and Joshua A. Levine},
  journal={IEEE Transactions on Visualization and Computer Graphics},
  year={2019},
  volume={25},
  pages={1636-1650}
}
We present a technique to synthesize and analyze volume-rendered images using generative models. We show how to guide the user in transfer function editing by quantifying expected change in the output image. Additionally, the generative model transforms transfer functions into a view-invariant latent space specifically designed to synthesize volume-rendered images. We use this space directly for rendering, enabling the user to explore the space of volume-rendered images. As our model is…
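As a rough illustration of the approach described above, the following sketch encodes a sampled RGBA transfer function into a latent code, decodes it together with view parameters into an image, and uses a gradient norm as a cheap proxy for the "expected change" that guides transfer function editing. All shapes, layer sizes, and the sensitivity estimate are illustrative assumptions, not the published architecture.

```python
# Minimal sketch (not the authors' code): a conditional generator mapping a
# sampled transfer function plus view parameters to a volume-rendered image.
import torch
import torch.nn as nn

class TFEncoder(nn.Module):
    """Encode a 1D RGBA transfer function (256 bins x 4 channels) into a
    view-invariant latent code."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 4, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )

    def forward(self, tf):           # tf: (B, 256, 4)
        return self.net(tf)          # (B, latent_dim)

class Generator(nn.Module):
    """Decode latent TF code + view parameters (azimuth, elevation) into an image."""
    def __init__(self, latent_dim=128, view_dim=2, img=64):
        super().__init__()
        self.img = img
        self.fc = nn.Linear(latent_dim + view_dim, 256 * (img // 16) ** 2)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid(),
        )

    def forward(self, z, view):                       # z: (B, D), view: (B, 2)
        h = self.fc(torch.cat([z, view], dim=1))
        h = h.view(-1, 256, self.img // 16, self.img // 16)
        return self.deconv(h)                         # (B, 3, img, img)

# "Expected change" guidance: the gradient norm of the image w.r.t. each TF bin
# is a cheap proxy for how much editing that bin would alter the rendering.
enc, gen = TFEncoder(), Generator()
tf = torch.rand(1, 256, 4, requires_grad=True)
view = torch.tensor([[0.3, 1.1]])
img = gen(enc(tf), view)
img.sum().backward()                          # one backward pass
sensitivity = tf.grad.norm(dim=2).squeeze()   # per-bin expected change, shape (256,)
```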
Differentiable Direct Volume Rendering
TLDR
A differentiable volume rendering solution that provides differentiability of all continuous parameters of the volume rendering process and introduces a novel approach for tomographic reconstruction from images using an emission-absorption model with post-shading via an arbitrary transfer function.
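To make the emission-absorption model concrete, here is a minimal sketch (not the paper's implementation) of front-to-back compositing with post-classification through an RGBA transfer function; the `composite_ray` helper, bin count, and step size are assumptions. Expressing the compositing in torch keeps it differentiable with respect to the transfer function entries, so a TF can be fit to a target ray color; the nearest-bin lookup used here is not differentiable with respect to the sample values themselves, which is part of what the paper's solution handles more generally.

```python
# Minimal sketch: emission-absorption ray compositing with post-shading via an
# RGBA transfer function, written with torch tensors so the TF receives gradients.
import torch

def composite_ray(samples, tf_rgba, step=0.01):
    """samples: (N,) scalar values in [0,1] along one ray.
    tf_rgba: (K, 4) transfer function, columns = (r, g, b, extinction).
    Returns the composited RGB color of the ray, front to back."""
    K = tf_rgba.shape[0]
    idx = (samples.clamp(0, 1) * (K - 1)).long()         # nearest-bin post-shading
    rgba = tf_rgba[idx]                                   # (N, 4)
    color, extinction = rgba[:, :3], rgba[:, 3]
    a = 1.0 - torch.exp(-extinction * step)               # per-sample opacity
    # Front-to-back transmittance: T_i = prod_{j<i} (1 - a_j)
    T = torch.cumprod(torch.cat([torch.ones(1), 1.0 - a[:-1]]), dim=0)
    return (T[:, None] * a[:, None] * color).sum(dim=0)   # (3,)

# Toy usage: optimize the TF so one ray matches a target color.
tf = torch.rand(64, 4, requires_grad=True)
samples = torch.linspace(0.2, 0.8, steps=128)
target = torch.tensor([0.9, 0.4, 0.1])
loss = ((composite_ray(samples, tf) - target) ** 2).sum()
loss.backward()                                            # gradients w.r.t. tf
```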
Deep Direct Volume Rendering: Learning Visual Feature Mappings From Exemplary Images
TLDR
This work introduces Deep Direct Volume Rendering (DeepDVR), a generalization of DVR that allows for the integration of deep neural networks into the DVR algorithm, and conceptualizes the rendering in a latent color space, thus enabling the use of deep architectures to learn implicit mappings for feature extraction and classification.
A Supervised Generative Model for Efficient Rendering of Medical Volume Data
  • M. Gavrilescu
  • Computer Science
    2020 International Conference on e-Health and Bioengineering (EHB)
  • 2020
TLDR
This work proposes a generative model based on a deep neural network that is continually trainable from data dynamically generated by a GPU-based renderer, and that allows the user to generate high-resolution images on low-spec hardware without the need for a GPU and without access to sensitive or protected patient data.
DNN-VolVis: Interactive Volume Visualization Supported by Deep Neural Network
TLDR
Generative Adversarial Networks (GANs) and Convolutional Neural Networks (CNNs) are combined to directly synthesize high-resolution, perceptually authentic images that implicitly inherit the desired transfer function and viewing parameters given by the input images.
Cinema Darkroom: A Deferred Rendering Framework for Large-Scale Datasets
TLDR
This paper demonstrates the use of Cinema Darkroom on several real-world datasets, highlighting CD’s ability to effectively decouple the complexity and size of the dataset from its visualization.
Fast Neural Representations for Direct Volume Rendering
TLDR
This paper proposes a novel design of scene representation networks using GPU tensor cores to integrate the reconstruction seamlessly into on-chip raytracing kernels, and compares the quality and performance of this network to alternative network- and non-network-based compression schemes.
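For intuition, the sketch below (not the paper's tensor-core implementation) fits a small coordinate MLP to a toy volume; `SceneRepresentationNet`, `toy_volume`, and all sizes are illustrative assumptions about what such a compact neural stand-in for a voxel grid looks like.

```python
# Minimal sketch: a scene representation network mapping 3D positions to scalar
# values, overfit to a synthetic volume (a soft sphere) as a stand-in dataset.
import torch
import torch.nn as nn

class SceneRepresentationNet(nn.Module):
    def __init__(self, hidden=64, depth=4):
        super().__init__()
        layers, dim = [], 3
        for _ in range(depth):
            layers += [nn.Linear(dim, hidden), nn.ReLU()]
            dim = hidden
        layers += [nn.Linear(dim, 1)]
        self.net = nn.Sequential(*layers)

    def forward(self, xyz):               # (N, 3) positions in [-1, 1]^3
        return self.net(xyz).squeeze(-1)  # (N,) reconstructed scalar values

def toy_volume(xyz):                      # synthetic ground truth, purely illustrative
    return torch.exp(-4.0 * (xyz.norm(dim=-1) - 0.5) ** 2)

net = SceneRepresentationNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(200):                      # overfit the network to the volume
    xyz = torch.rand(4096, 3) * 2 - 1
    loss = ((net(xyz) - toy_volume(xyz)) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```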
Volumetric Isosurface Rendering with Deep Learning-Based Super-Resolution
TLDR
A fully convolutional neural network is introduced to learn a latent representation that generates smooth, edge-aware depth and normal fields as well as ambient occlusion from a low-resolution depth and normal field; adding a frame-to-frame motion loss to the learning stage lets the upscaling account for temporal variations and achieves improved frame-to-frame coherence.
Deep Volumetric Ambient Occlusion
TLDR
The proposed Deep Volumetric Ambient Occlusion (DVAO) approach can predict per-voxel ambient occlusion in volumetric data sets, while considering global information provided through the transfer function, and thus supports real-time volume interaction.
FrankenGAN: Guided Detail Synthesis for Building Mass-Models Using Style-Synchonized GANs
TLDR
This work proposes FrankenGAN, a cascade of GANs that creates plausible details across multiple scales over large neighborhoods, yields consistent style distributions over buildings and neighborhoods, and gives the user direct control over the variability of the output.
A real-time image-centric transfer function design based on incremental classification
TLDR
A novel image-centric method for the real-time generation of transfer functions based on incremental classification is presented, and a novel incremental classifier, the incremental discriminant-based support vector machine (IDSVM), that can learn over time is introduced.

References

Scribbler: Controlling Deep Image Synthesis with Sketch and Color
TLDR
A deep adversarial image synthesis architecture conditioned on sketched boundaries and sparse color strokes is proposed to generate realistic cars, bedrooms, or faces, and a sketch-based image synthesis system is demonstrated that allows users to scribble over the sketch to indicate the preferred color for objects.
Interactive Transfer Function Design Based on Editing Direct Volume Rendered Images
  • Yingcai Wu, Huamin Qu
  • Computer Science
    IEEE Transactions on Visualization and Computer Graphics
  • 2007
TLDR
This paper proposes a framework for editing DVRIs, which can also be used for interactive transfer function (TF) design, and shows how these editing operations can generate smooth animations for focus + context visualization.
State of the Art in Transfer Functions for Direct Volume Rendering
TLDR
The purpose of this state‐of‐the‐art report (STAR) is to provide an overview of research into the various aspects of TFs, which lead to interpretation of the underlying data through the use of meaningful visual representations.
Multidimensional Transfer Functions for Interactive Volume Rendering
TLDR
An important class of 3D transfer functions for scalar data is demonstrated, and the application of multi-dimensional transfer functions to multivariate data is described, and a set of direct manipulation widgets that make specifying such transfer functions intuitive and convenient are presented.
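To make this concrete, here is a minimal sketch (not taken from the paper) of a two-dimensional transfer function indexed by scalar value and gradient magnitude, the classic way to isolate material boundaries that a 1D TF cannot separate; the table layout, bin counts, and the `classify` helper are illustrative assumptions.

```python
# Minimal sketch: classify samples through a 2D RGBA transfer function.
import numpy as np

def classify(values, grad_mags, tf2d):
    """values, grad_mags: per-sample scalars in [0, 1].
    tf2d: (V, G, 4) RGBA table; rows = value bins, columns = gradient-magnitude bins."""
    V, G, _ = tf2d.shape
    vi = np.clip((values * (V - 1)).astype(int), 0, V - 1)
    gi = np.clip((grad_mags * (G - 1)).astype(int), 0, G - 1)
    return tf2d[vi, gi]                         # (N, 4) RGBA per sample

# Example: make only high-gradient samples (material boundaries) opaque.
tf2d = np.zeros((256, 64, 4))
tf2d[:, 48:, :] = [1.0, 0.8, 0.6, 0.4]          # boundary material: warm, semi-opaque
rgba = classify(np.random.rand(1000), np.random.rand(1000), tf2d)
```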
Generative Image Modeling Using Style and Structure Adversarial Networks
TLDR
This paper factorizes the image generation process and proposes the Style and Structure Generative Adversarial Network, a model that is interpretable, generates more realistic images, and can be used to learn unsupervised RGBD representations.
Curvature-based transfer functions for direct volume rendering: methods and applications
TLDR
The proposed methodology combines an implicit formulation of curvature with convolution-based reconstruction of the field; concrete guidelines for implementing the methodology are given, and the importance of choosing accurate filters for computing derivatives with convolution is illustrated.
Size-based Transfer Functions: A New Volume Exploration Technique
TLDR
This paper introduces size-based transfer functions, which map the local scale of features to color and opacity, and shows that they can improve classification and enhance volume rendering techniques, such as maximum intensity projection.
Local Histograms for Design of Transfer Functions in Direct Volume Rendering
TLDR
This paper uses histograms of local neighborhoods to capture tissue characteristics and perform a classification in which the tissue-type certainty is treated as a second TF dimension, resulting in an enhanced rendering where tissues with overlapping intensity ranges can be discerned without requiring the user to explicitly define a complex, multidimensional TF.
COVRA: A compression‐domain output‐sensitive volume rendering architecture based on a sparse representation of voxel blocks
TLDR
A novel multiresolution compression-domain GPU volume rendering architecture designed for interactive local and networked exploration of rectilinear scalar volumes on commodity platforms is presented and demonstrated on massive static and time-varying datasets.
StackGAN: Text to Photo-Realistic Image Synthesis with Stacked Generative Adversarial Networks
TLDR
This paper proposes Stacked Generative Adversarial Networks (StackGAN) to generate 256×256 photo-realistic images conditioned on text descriptions, and introduces a novel Conditioning Augmentation technique that encourages smoothness in the latent conditioning manifold.
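For reference, here is a minimal sketch of the Conditioning Augmentation idea described above: instead of conditioning the generator on the raw text embedding, sample a conditioning vector from a Gaussian whose mean and variance are predicted from the embedding, and regularize it toward a standard normal. Dimensions and the class name are assumptions.

```python
# Minimal sketch: Conditioning Augmentation via reparameterization + KL penalty.
import torch
import torch.nn as nn

class ConditioningAugmentation(nn.Module):
    def __init__(self, embed_dim=1024, cond_dim=128):
        super().__init__()
        self.fc = nn.Linear(embed_dim, cond_dim * 2)   # predicts mean and log-variance

    def forward(self, text_embedding):                 # (B, embed_dim)
        mu, logvar = self.fc(text_embedding).chunk(2, dim=1)
        c = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)        # reparameterize
        kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1).mean()
        return c, kl                                   # conditioning vector + KL penalty
```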