• Corpus ID: 227208689

3DSNet: Unsupervised Shape-to-Shape 3D Style Transfer

Mattia Segu, Margarita Grinvald, Roland Y. Siegwart, Federico Tombari
Transferring the style of one image onto another is a popular and widely studied task in computer vision. Yet, learning-based style transfer in the 3D setting remains a largely unexplored problem. To our knowledge, we propose the first learning-based generative approach for style transfer between 3D objects. Our method combines the content and style of a source and a target 3D model to generate a novel shape that resembles the target in style while retaining the source content. The… 
SNeRF: Stylized Neural Implicit Representations for 3D Scenes
This paper introduces a new training method for 3D scene stylization that provides a strong inductive bias for consistent novel view synthesis and makes full use of available hardware memory capacity, both to generate images at higher resolution and to adopt more expressive image style transfer methods.
Text2Mesh: Text-Driven Neural Stylization for Meshes
The Text2Mesh framework stylizes a 3D mesh by predicting color and local geometric details that conform to a target text prompt; the technique is demonstrated to synthesize a myriad of styles over a wide variety of 3D meshes.
FaceTuneGAN: Face Autoencoder for Convolutional Expression Transfer Using Neural Generative Adversarial Networks
FaceTuneGAN achieves better identity decomposition and face neutralization than state-of-the-art techniques, and outperforms the classical deformation transfer approach by predicting blendshapes closer to ground-truth data, with fewer undesired artifacts caused by overly different facial morphologies between source and target.
Wasserstein Patch Prior for Image Superresolution
A Wasserstein patch prior for superresolution of two- and three-dimensional images is introduced; the proposed regularizer penalizes the W2-distance between the patch distribution of the reconstruction and the patch distribution of some reference image at different scales.
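The core idea, comparing patch distributions rather than pixels, can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the W2-distance is approximated by a sliced (random-projection) variant, the patch size is arbitrary, and both images are assumed to yield equally many patches; it is not the paper's exact formulation.

```python
import numpy as np

def extract_patches(img, p):
    """Collect all overlapping p x p patches of a 2D image as flat vectors."""
    H, W = img.shape
    return np.stack([img[i:i + p, j:j + p].ravel()
                     for i in range(H - p + 1)
                     for j in range(W - p + 1)])

def sliced_w2(P, Q, n_proj=64, seed=0):
    """Approximate the squared W2 distance between two equally sized patch
    sets by averaging 1D Wasserstein distances over random projections."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_proj):
        v = rng.normal(size=P.shape[1])
        v /= np.linalg.norm(v)
        # 1D optimal transport: match sorted projections
        a, b = np.sort(P @ v), np.sort(Q @ v)
        total += np.mean((a - b) ** 2)
    return total / n_proj
```

The sliced approximation keeps the comparison distributional (it is invariant to patch ordering) while avoiding a full optimal-transport solve.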
UVStyle-Net: Unsupervised Few-shot Learning of 3D Style Similarity Measure for B-Reps
UVStyle-Net is proposed, a style similarity measure for B-Reps that leverages the style signals in the second-order statistics of the activations of a pre-trained (unsupervised) 3D encoder, and learns their relative importance to a subjective end-user through few-shot learning.
2021-03567 PhD position (F/M) [CORDIC2021-TITANE]: Learning the geometric signature of CAD models
  • Education
  • 2021
The Inria Sophia Antipolis Méditerranée center counts 34 research teams as well as 7 support departments. The center's staff (about 500 people, including 320 Inria employees) is made up of scientists.
PhD position in Computer Vision and Machine Learning Learning the geometric signature of CAD models
  • Computer Science
  • 2021
To be competitive with user-guided CAD, automated geometric modeling must be able to detect, learn, and replicate these design variations; this could be tackled by learning and exploiting such expert knowledge.


Neural Mesh Flow: 3D Manifold Mesh Generation via Diffeomorphic Flows
This work proposes Neural Mesh Flow (NMF) to generate two-manifold meshes for genus-0 shapes using a shape auto-encoder consisting of several Neural Ordinary Differential Equation blocks that learn accurate mesh geometry by progressively deforming a spherical mesh.
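The progressive-deformation idea behind such ODE blocks can be sketched as follows. This is a toy NumPy illustration under stated assumptions: the velocity field is a fixed analytic function standing in for a learned network, and simple Euler integration replaces an adaptive ODE solver.

```python
import numpy as np

def integrate_flow(points, velocity, n_steps=50, t1=1.0):
    """Deform a point set by Euler-integrating a velocity field v(x, t),
    mimicking one Neural-ODE deformation block."""
    dt = t1 / n_steps
    x = points.copy()
    for k in range(n_steps):
        x = x + dt * velocity(x, k * dt)
    return x

def toy_velocity(x, t):
    """Hypothetical field: stretch along z, squash along x.
    A smooth field integrated in small steps keeps the map invertible,
    which is what preserves the manifold property of the mesh."""
    out = np.zeros_like(x)
    out[:, 2] = 0.5 * x[:, 2]
    out[:, 0] = -0.3 * x[:, 0]
    return out
```

Starting from vertices of a spherical template and flowing them through such a field is, in spirit, how a genus-0 mesh is deformed without self-intersections.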
Multimodal Unsupervised Image-to-Image Translation
A Multimodal Unsupervised Image-to-Image Translation (MUNIT) framework is proposed, which assumes that the image representation can be decomposed into a content code that is domain-invariant and a style code that captures domain-specific properties.
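The content/style decomposition can be sketched in a few lines. This is a minimal NumPy stand-in under stated assumptions: the encoders and decoder are plain random linear maps rather than trained convolutional networks, so only the recombination mechanics of MUNIT are shown.

```python
import numpy as np

rng = np.random.default_rng(0)
D, C, S = 8, 3, 2  # "image" dim, content-code dim, style-code dim

# Stand-ins for the learned content encoder, style encoder, and decoder.
E_content = rng.normal(size=(C, D))
E_style = rng.normal(size=(S, D))
G_decode = rng.normal(size=(D, C + S))

def translate(x_content, x_style):
    """Recombine the content code of one sample with the style code of
    another; in MUNIT this is how cross-domain translation is performed."""
    c = E_content @ x_content
    s = E_style @ x_style
    return G_decode @ np.concatenate([c, s])
```

Sampling different style codes for a fixed content code is what makes the translation multimodal.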
Perceptual Losses for Real-Time Style Transfer and Super-Resolution
This work considers image transformation problems and proposes the use of perceptual loss functions for training feed-forward networks for such tasks, showing results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al. in real time.
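A perceptual loss compares activations of a fixed feature network rather than raw pixels. The NumPy sketch below keeps only that idea; the frozen random projection is an assumption standing in for the pretrained VGG features used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Frozen "feature extractor": a random projection plus ReLU, standing in
# for a pretrained network's activations.
W_feat = rng.normal(size=(32, 64))

def features(x):
    return np.maximum(W_feat @ x, 0.0)  # ReLU activations

def perceptual_loss(x, y):
    """Mean squared distance in feature space rather than pixel space."""
    return np.mean((features(x) - features(y)) ** 2)
```

Because the extractor is fixed, gradients flow only into the image-generating network being trained, which is what makes real-time feed-forward stylization possible.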
ShapeNet: An Information-Rich 3D Model Repository
ShapeNet contains 3D models from a multitude of semantic categories, organized under the WordNet taxonomy; it is a collection of datasets providing many semantic annotations for each 3D model, such as consistent rigid alignments, parts and bilateral symmetry planes, physical sizes, and keywords, as well as other planned annotations.
Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks
The architecture introduced in this paper learns a mapping function G : X → Y using an adversarial loss such that G(X) cannot be distinguished from Y, where X and Y are images belonging to two different domains.
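The cycle-consistency term that makes this unpaired training work can be sketched directly. In the NumPy toy below, the "generators" are invertible scalar maps, an assumption chosen only to make the loss behavior visible; the real G and F are neural networks.

```python
import numpy as np

def cycle_loss(x, G, F):
    """L1 cycle-consistency term: mapping to the other domain and back
    should reconstruct the input, i.e. F(G(x)) ~ x."""
    return np.mean(np.abs(F(G(x)) - x))

# Toy stand-in generators: G maps into the target domain by doubling,
# F maps back by halving, so the cycle is exact.
G = lambda x: 2.0 * x
F = lambda x: 0.5 * x
```

In CycleGAN this term is added to the adversarial losses of both directions; without it, G could map all inputs to any plausible target-domain image.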
PSNet: A Style Transfer Network for Point Cloud Stylization on Geometry and Color
Experimental results and analysis demonstrate the capability of the proposed neural style transfer method for stylizing a point cloud either from another point cloud or an image.
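Point-cloud style losses of this kind are typically built on Gram matrices of per-point features, since a Gram matrix discards point ordering. The NumPy sketch below illustrates that property; the features here are assumed to be arbitrary per-point vectors rather than PSNet's learned activations.

```python
import numpy as np

def gram(feat):
    """Gram matrix of per-point features (N x C): second-order statistics
    that discard point ordering and so act as a style descriptor."""
    return (feat.T @ feat) / feat.shape[0]

def style_loss(feat_out, feat_style):
    """Squared distance between the Gram matrices of two point sets."""
    return np.mean((gram(feat_out) - gram(feat_style)) ** 2)
```

Permutation invariance is the key design point: reordering the points of either cloud leaves the loss (essentially) unchanged.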
Neural Cages for Detail-Preserving 3D Deformations
The method extends a traditional cage-based deformation technique, in which the source shape is enclosed by a coarse control mesh termed a cage, and translations prescribed on the cage vertices are interpolated to any point on the source mesh via special weight functions.
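The cage interpolation step can be sketched as follows. Assumption: normalized inverse-distance weights are used as a simple stand-in for the special weight functions (e.g. mean value coordinates) of real cage-based deformation.

```python
import numpy as np

def cage_weights(points, cage, eps=1e-9):
    """Normalized inverse-distance weight of each point w.r.t. each cage
    vertex; rows sum to one, so constant offsets are reproduced exactly."""
    d = np.linalg.norm(points[:, None, :] - cage[None, :, :], axis=2)
    w = 1.0 / (d + eps)
    return w / w.sum(axis=1, keepdims=True)

def deform(points, cage, cage_offsets):
    """Interpolate translations prescribed on cage vertices onto the shape."""
    return points + cage_weights(points, cage) @ cage_offsets
```

Because the weights depend only on the source shape and cage, moving a handful of cage vertices smoothly drags the dense mesh along, which is what preserves surface detail.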
A Papier-Mâché Approach to Learning 3D Surface Generation
This work introduces a method for learning to generate the surface of 3D shapes as a collection of parametric surface elements; in contrast to methods generating voxel grids or point clouds, it naturally infers a surface representation of the shape.
Style-content separation by anisotropic part scales
It is shown that confining analysis to within a style cluster facilitates tasks such as co-segmentation, content classification, and deformation-driven part correspondence, and style transfer can be easily performed.
Unsupervised Image-to-Image Translation Networks
This work makes a shared-latent space assumption and proposes an unsupervised image-to-image translation framework based on Coupled GANs that achieves state-of-the-art performance on benchmark datasets.