C-Flow: Conditional Generative Flow Models for Images and 3D Point Clouds

@article{Pumarola2020CFlowCG,
  title={C-Flow: Conditional Generative Flow Models for Images and 3D Point Clouds},
  author={Albert Pumarola and Stefan Popov and Francesc Moreno-Noguer and Vittorio Ferrari},
  journal={2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2020},
  pages={7946-7955}
}
Flow-based generative models have highly desirable properties, such as exact log-likelihood evaluation and exact latent-variable inference; however, they are still in their infancy and have not received as much attention as alternative generative models. In this paper, we introduce C-Flow, a novel conditioning scheme that brings normalizing flows to an entirely new scenario with great possibilities for multimodal data modeling. C-Flow is based on a parallel sequence of invertible mappings in which a…
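Although the abstract is truncated here, the block below is a minimal sketch of the kind of invertible, condition-aware building block that conditional normalizing flows of this family are assembled from: a conditional affine coupling layer. The class name, layer sizes, and the cond argument are our own illustrative assumptions, not C-Flow's actual implementation.

import torch
import torch.nn as nn

class ConditionalAffineCoupling(nn.Module):
    """Illustrative conditional affine coupling layer (hypothetical, not the paper's code)."""
    def __init__(self, dim, cond_dim, hidden=128):
        super().__init__()
        self.half = dim // 2
        # Small network predicting scale and shift for the second half of the
        # input from the first half plus the conditioning signal.
        self.net = nn.Sequential(
            nn.Linear(self.half + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x, cond):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        s, t = self.net(torch.cat([x1, cond], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)                      # keep scales bounded for stability
        y2 = x2 * torch.exp(s) + t             # affine transform of the second half
        log_det = s.sum(dim=1)                 # exact log|det Jacobian| of the mapping
        return torch.cat([x1, y2], dim=1), log_det

    def inverse(self, y, cond):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(torch.cat([y1, cond], dim=1)).chunk(2, dim=1)
        s = torch.tanh(s)
        x2 = (y2 - t) * torch.exp(-s)          # exact inverse, no iterative solve needed
        return torch.cat([y1, x2], dim=1)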
Discrete Point Flow Networks for Efficient Point Cloud Generation
TLDR
A latent-variable model is introduced that builds on normalizing flows with affine coupling layers to generate 3D point clouds of arbitrary size given a latent shape representation, offering a significant speedup in both training and inference for similar or better performance.
MM-Flow: Multi-modal Flow Network for Point Cloud Completion
TLDR
A flow-based network with a multi-modal mapping strategy is proposed for 3D point cloud completion; it is trained with a single negative log-likelihood loss that captures the distribution variations between input and output, without complex reconstruction or adversarial losses.
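For context, training a conditional normalizing flow with a single negative log-likelihood loss generally amounts to minimizing the change-of-variables objective below (notation is ours, not necessarily MM-Flow's):

\mathcal{L}(\theta) = -\log p_\theta(x \mid c) = -\log p_Z\big(f_\theta(x; c)\big) - \log\left|\det \frac{\partial f_\theta(x; c)}{\partial x}\right|,

where c is the conditioning input (here, the partial point cloud) and p_Z is a simple base density such as a standard Gaussian.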
Unsupervised Learning of Fine Structure Generation for 3D Point Clouds by 2D Projection Matching
TLDR
This work casts 3D point cloud learning as a 2D projection matching problem and introduces structure-adaptive sampling, which randomly samples 2D points within the silhouettes as irregular point supervision and alleviates the consistency issue of sampling from different view angles.
HSGAN: Hierarchical Graph Learning for Point Cloud Generation
  • Yushi Li, G. Baciu
  • IEEE Transactions on Image Processing
  • 2021
TLDR
This work proposes a novel Generative Adversarial Network (GAN) named HSGAN, or Hierarchical Self-Attention GAN, with remarkable properties for 3D shape generation, and presents a new adversarial loss that maintains training stability and overcomes the potential mode collapse of traditional GANs.
DehazeFlow: Multi-scale Conditional Flow Network for Single Image Dehazing
TLDR
DehazeFlow is proposed, a novel single-image dehazing framework based on conditional normalizing flows that surpasses state-of-the-art methods in terms of PSNR, SSIM, LPIPS, and subjective visual quality.
H3D-Net: Few-Shot High-Fidelity 3D Head Reconstruction
TLDR
This paper endows coordinate-based representations with a probabilistic shape prior that enables faster convergence and better generalization when using few input images, and achieves high-fidelity head reconstructions with a high level of detail, consistently outperforming both state-of-the-art 3D Morphable Model methods in the few-shot scenario and nonparametric methods when large sets of views are available.
SynLiDAR: Learning From Synthetic LiDAR Sequential Point Cloud for Semantic Segmentation
TLDR
Extensive experiments over multiple data augmentation and semi-supervised semantic segmentation tasks show very positive outcomes: SynLiDAR can either train better models or reduce the amount of real-world annotated data without sacrificing performance, and PCT-Net-translated data further improve model performance consistently.
Generative Flows with Invertible Attentions
TLDR
This paper proposes map-based and scaled dot-product attention for unconditional and conditional generative flow models, exploiting split-based attention mechanisms to learn the attention weights and input representations on every two splits of the flow feature maps.
Normalizing Flow as a Flexible Fidelity Objective for Photo-Realistic Super-resolution
TLDR
This work revisits the L1 loss and shows that it corresponds to a one-layer conditional flow; inspired by this relation, it demonstrates that the flexibility of deeper flows leads to better visual quality and consistency when combined with adversarial losses.
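The L1/flow correspondence can be made explicit: minimizing an L1 reconstruction loss is, up to constants, maximizing a Laplace likelihood centered at the network prediction, which reads as a single conditional flow layer that merely shifts the output by the prediction (notation is ours):

p(y \mid x) = \prod_i \frac{1}{2b} \exp\!\left(-\frac{|y_i - \hat{y}_i(x)|}{b}\right) \;\Longrightarrow\; -\log p(y \mid x) = \frac{1}{b}\,\|y - \hat{y}(x)\|_1 + \mathrm{const.}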
Low-Light Image Enhancement with Normalizing Flow
TLDR
This paper investigates modeling this one-to-many relationship via a normalizing flow model: an invertible network that takes the low-light images/features as the condition and learns to map the distribution of normally exposed images into a Gaussian distribution.

References

Showing 1-10 of 66 references.
PointFlow: 3D Point Cloud Generation With Continuous Normalizing Flows
TLDR
A principled probabilistic framework generates 3D point clouds by modeling them as a distribution of distributions; the invertibility of normalizing flows enables the computation of the likelihood during training and allows the model to be trained in the variational inference framework.
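The continuous normalizing flows that PointFlow builds on track the log-density of each point along an ODE via the standard instantaneous change-of-variables formula, reproduced here for context (notation is ours):

\frac{\partial z(t)}{\partial t} = f_\theta(z(t), t), \qquad \log p\big(z(t_1)\big) = \log p\big(z(t_0)\big) - \int_{t_0}^{t_1} \operatorname{tr}\!\left(\frac{\partial f_\theta}{\partial z(t)}\right) dt.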
A Point Set Generation Network for 3D Object Reconstruction from a Single Image
TLDR
This paper addresses the problem of 3D reconstruction from a single image, generating a straightforward but unorthodox form of output (point cloud coordinates), and designs an architecture, loss function, and learning paradigm that are novel and effective, capable of predicting multiple plausible 3D point clouds from an input image.
Learning Single-Image 3D Reconstruction by Generative Modelling of Shape, Pose and Shading
TLDR
A unified framework tackles two problems, class-specific 3D reconstruction from a single image and generation of new 3D shape samples; it can learn to generate and reconstruct concave object classes such as bathtubs and sofas, which methods based on silhouettes cannot learn.
Learning 3D Shape Completion from Laser Scan Data with Weak Supervision
TLDR
This work proposes a weakly supervised learning-based approach to 3D shape completion that requires neither slow optimization nor direct supervision, and is able to compete with a fully supervised baseline and a state-of-the-art data-driven approach while being significantly faster.
AtlasNet: A Papier-Mâché Approach to Learning 3D Surface Generation
TLDR
A method is presented for learning to generate the surface of 3D shapes as a collection of parametric surface elements; in contrast to methods generating voxel grids or point clouds, it naturally infers a surface representation of the shape.
Conditional Adversarial Generative Flow for Controllable Image Synthesis
TLDR
This paper proposes a novel flow-based generative model named conditional adversarial generative flow (CAGlow), which can synthesize images from conditional information such as categories, attributes, and even some unknown properties; it ensures the independence of different conditions and outperforms regular Glow to a significant extent.
PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation
TLDR
This paper designs a novel type of neural network that directly consumes point clouds, respects the permutation invariance of points in the input, and provides a unified architecture for applications ranging from object classification and part segmentation to scene semantic parsing.
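The permutation invariance comes from applying a shared per-point network and aggregating with a symmetric function such as max pooling; the sketch below illustrates that idea with our own layer sizes and is not the paper's reference implementation.

import torch
import torch.nn as nn

class TinyPointNet(nn.Module):
    """Minimal PointNet-style encoder: shared per-point MLP + symmetric max pooling."""
    def __init__(self, in_dim=3, feat_dim=256):
        super().__init__()
        # The same MLP is applied to every point, so reordering points
        # only reorders the intermediate features.
        self.point_mlp = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )

    def forward(self, points):                    # points: (batch, num_points, 3)
        per_point = self.point_mlp(points)        # (batch, num_points, feat_dim)
        global_feat, _ = per_point.max(dim=1)     # max over points: permutation invariant
        return global_feat                        # (batch, feat_dim) global shape feature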
Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images
TLDR
An end-to-end deep learning architecture that produces a 3D shape as a triangular mesh from a single color image by progressively deforming an ellipsoid, leveraging perceptual features extracted from the input image.
High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs
TLDR
A new method is presented for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs); it significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing.
FoldingNet: Point Cloud Auto-Encoder via Deep Grid Deformation
TLDR
A novel end-to-end deep auto-encoder is proposed to address unsupervised learning challenges on point clouds and is shown, in theory, to be a generic architecture able to reconstruct an arbitrary point cloud from a 2D grid.
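A single "folding" step of such a grid-deformation decoder can be sketched as follows: a fixed 2D grid is concatenated with the shape codeword and mapped to 3D by a shared MLP. FoldingNet chains two such folds; the class name and layer sizes here are our own illustrative choices.

import torch
import torch.nn as nn

class FoldingDecoder(nn.Module):
    """Illustrative single-fold decoder: deform a fixed 2D grid into a 3D point cloud."""
    def __init__(self, code_dim=512, grid_size=45):
        super().__init__()
        # Fixed 2D grid in [-1, 1]^2 that will be "folded" onto the shape surface.
        lin = torch.linspace(-1.0, 1.0, grid_size)
        u, v = torch.meshgrid(lin, lin, indexing="ij")
        self.register_buffer("grid", torch.stack([u.reshape(-1), v.reshape(-1)], dim=1))  # (G, 2)
        self.fold = nn.Sequential(
            nn.Linear(code_dim + 2, 512), nn.ReLU(),
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 3),
        )

    def forward(self, codeword):                                      # codeword: (batch, code_dim)
        b, g = codeword.shape[0], self.grid.shape[0]
        grid = self.grid.unsqueeze(0).expand(b, g, 2)                 # (batch, G, 2)
        code = codeword.unsqueeze(1).expand(b, g, codeword.shape[1])  # (batch, G, code_dim)
        return self.fold(torch.cat([grid, code], dim=2))              # (batch, G, 3) point cloud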