Initialization and Alignment for Adversarial Texture Optimization

@inproceedings{Zhao2022InitializationAA,
  title={Initialization and Alignment for Adversarial Texture Optimization},
  author={Xiaoming Zhao and Zhizhen Zhao and Alexander G. Schwing},
  booktitle={European Conference on Computer Vision},
  year={2022}
}
While recovery of geometry from image and video data has received a lot of attention in computer vision, methods to capture the texture for a given geometry are less mature. Specifically, classical methods for texture generation often assume clean geometry and reasonably well-aligned image data. While very recent methods, e.g., adversarial texture optimization, better handle lower-quality data obtained from hand-held devices, we find them to still struggle frequently. To improve robustness… 
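
For context, the adversarial texture optimization the abstract refers to treats the texture atlas itself as the optimization variable and trains it against a patch discriminator that compares renders with captured photos, which tolerates misalignment that plain per-pixel averaging cannot. Below is a minimal PyTorch sketch of that general idea; the atlas resolution, the discriminator architecture, the losses, and the hypothetical next_training_pair loader are all illustrative assumptions, not the paper's implementation.

  import torch
  import torch.nn as nn
  import torch.nn.functional as F

  # The texture atlas itself is the optimization variable (size illustrative).
  texture = nn.Parameter(torch.rand(1, 3, 256, 256))

  # A tiny patch discriminator; a stand-in architecture, not the paper's.
  disc = nn.Sequential(
      nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
      nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
      nn.Conv2d(64, 1, 4, padding=1),
  )
  opt_tex = torch.optim.Adam([texture], lr=1e-3)
  opt_disc = torch.optim.Adam(disc.parameters(), lr=1e-4)

  def render(uv):
      # uv: (1, H, W, 2) texture coordinates in [-1, 1], rasterized from the
      # known geometry and camera pose; grid_sample is a differentiable lookup.
      return F.grid_sample(texture, uv, align_corners=True)

  for step in range(1000):
      uv, photo = next_training_pair()  # hypothetical loader: UV map + photo crop
      fake = render(uv)

      # Discriminator step: real captured patches vs. renders of the texture.
      d_loss = (F.softplus(disc(fake.detach())).mean()
                + F.softplus(-disc(photo)).mean())
      opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

      # Texture step: update the atlas to fool the discriminator.
      g_loss = F.softplus(-disc(fake)).mean()
      opt_tex.zero_grad(); g_loss.backward(); opt_tex.step()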

References (showing 1-10 of 56)

Patch-based optimization for image-based texture mapping

This paper uses patch-based synthesis to reconstruct a set of photometrically consistent, aligned images, drawing information from the source images through iterative patch search, voting, and reconstruction.
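
To make "patch search and vote" concrete: each target patch is matched to its most similar source patch, and the matched patches are averaged ("voted") into the output. The brute-force NumPy sketch below captures the idea only; the paper uses PatchMatch-style randomized search for speed, and the patch size and stride here are arbitrary choices.

  import numpy as np

  def search_and_vote(target, source, p=7):
      # One brute-force search-and-vote pass over H x W x 3 float images.
      H, W = target.shape[:2]
      sh, sw = source.shape[:2]
      # Gather every p x p source patch once, flattened to rows.
      src = np.stack([source[i:i + p, j:j + p].ravel()
                      for i in range(sh - p + 1)
                      for j in range(sw - p + 1)])
      out = np.zeros_like(target, dtype=np.float64)
      cnt = np.zeros((H, W, 1))
      for i in range(0, H - p + 1, 2):          # stride 2 keeps the demo fast
          for j in range(0, W - p + 1, 2):
              q = target[i:i + p, j:j + p].ravel()
              k = np.argmin(((src - q) ** 2).sum(axis=1))  # nearest source patch
              out[i:i + p, j:j + p] += src[k].reshape(p, p, -1)  # vote
              cnt[i:i + p, j:j + p] += 1
      return out / np.maximum(cnt, 1)           # average overlapping votes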

Shape and Viewpoint without Keypoints

We present a learning framework that recovers 3D shape, pose, and texture from a single image, trained on an image collection without any ground-truth 3D shape, multi-view, camera-viewpoint, or keypoint supervision.

Color adjustment in image-based texture maps

Texture Mapping for 3D Reconstruction with RGB-D Sensor

This paper first adaptively selects an optimal image for each face of the 3D model, which effectively removes the blurring and ghosting artifacts produced by blending multiple images, and then adopts a non-rigid, global-to-local correction step to reduce seams between textures.
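
The per-face view selection step can be pictured as scoring every (face, view) pair and keeping the argmax. In the NumPy sketch below, the score (facing angle times a per-image sharpness weight) is an illustrative stand-in for the paper's actual selection criterion.

  import numpy as np

  def select_views(face_normals, face_centers, cam_centers, sharpness):
      # face_normals: (F, 3) unit normals; face_centers: (F, 3);
      # cam_centers: (V, 3); sharpness: (V,) per-image quality weights.
      d = cam_centers[None, :, :] - face_centers[:, None, :]   # (F, V, 3)
      d /= np.linalg.norm(d, axis=-1, keepdims=True)
      cos = np.einsum('fvc,fc->fv', d, face_normals)           # facing term
      score = np.clip(cos, 0.0, None) * sharpness[None, :]
      return score.argmax(axis=1)       # index of the best view per face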

Deferred neural rendering

This work proposes Neural Textures: learned feature maps, trained as part of the scene-capture process, that can be used to coherently re-render or manipulate existing video content in both static and dynamic environments at real-time rates.
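
Schematically, a neural texture is a learned multi-channel atlas, sampled with the UV coordinates of a rasterized proxy mesh and decoded to RGB by a network, all trained end to end. A minimal PyTorch sketch follows; the channel counts are assumptions, and the small CNN stands in for the paper's deferred renderer, which is a U-Net over screen-space features.

  import torch
  import torch.nn as nn
  import torch.nn.functional as F

  # A learned 16-channel feature map ("neural texture"); sizes illustrative.
  neural_tex = nn.Parameter(torch.randn(1, 16, 512, 512) * 0.01)

  # Small decoder turning sampled features into RGB.
  decoder = nn.Sequential(
      nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
      nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
  )

  def render_frame(uv):
      # uv: (1, H, W, 2) screen-space texture coordinates in [-1, 1],
      # rasterized from the proxy geometry. Sampling is differentiable,
      # so the texture and the decoder train jointly from photos.
      feats = F.grid_sample(neural_tex, uv, align_corners=True)
      return decoder(feats)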

NeuTex: Neural Texture Mapping for Volumetric Neural Rendering

By separating geometry and texture, this work lets users edit appearance simply by editing 2D texture maps, and demonstrates that the representation can be reconstructed from multi-view image supervision alone while producing high-quality renderings.

DeepVoxels: Learning Persistent 3D Feature Embeddings

This work proposes DeepVoxels, a learned representation that encodes the view-dependent appearance of a 3D scene without explicitly modeling its geometry, based on a Cartesian 3D grid of persistent embedded features that learn to exploit the underlying 3D scene structure.
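
The same differentiable-lookup pattern extends to 3D: a persistent feature volume is sampled with trilinear interpolation and trained from posed images. A minimal sketch, with grid size and channel count as assumptions (the full method adds feature lifting, occlusion reasoning, and a rendering network omitted here):

  import torch
  import torch.nn.functional as F

  # Persistent 3D grid of learned features: (batch, channels, D, H, W).
  voxel_feats = torch.nn.Parameter(torch.randn(1, 8, 32, 32, 32) * 0.01)

  def sample_volume(xyz):
      # xyz: (1, D', H', W', 3) query coordinates in [-1, 1]. grid_sample on
      # a 5D input performs trilinear interpolation, so the grid is trained
      # end to end through the rendering loss.
      return F.grid_sample(voxel_feats, xyz, align_corners=True)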

Photorealistic Facial Texture Inference Using Deep Neural Networks

A data-driven inference method is presented that can synthesize a photorealistic texture map of a complete 3D face model given a partial 2D view of a person in the wild; successful face reconstructions from a wide range of low-resolution input images are demonstrated.

Let There Be Color! Large-Scale Texturing of 3D Reconstructions

This work presents the first comprehensive texturing framework for large-scale, real-world 3D reconstructions, and addresses most challenges occurring in such reconstructions: the large number of input images, their drastically varying properties such as image scale, (out-of-focus) blur, exposure variation, and occluders.

Seamless Mosaicing of Image-Based Texture Maps

  • V. Lempitsky, D. Ivanov · 2007 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Unlike previous approaches to the same problem, intensity blending as well as image resampling are avoided on all stages of the process, which ensures that the resolution of the produced texture is essentially the same as that of the original views.
...