Pixel-wise Conditioned Generative Adversarial Networks for Image Synthesis and Completion

Cyprien Ruffino, Romain Hérault, Eric Laloy, Gilles Gasso

Generative networks as inverse problems with fractional wavelet scattering networks

Generative Fractional Scattering Networks (GFRSNs) are proposed, which use the more expressive fractional wavelet scattering networks (FrScatNets) as the encoder to obtain the features (FrScatNet embeddings), and use CNNs similar to those of GSNs as the decoder to generate the image.

Implicit privacy preservation: a framework based on data generation

An ex-ante implicit privacy-preserving framework based on data generation, called IMPOSTER, is proposed; it alleviates the disclosure of implicit privacy while maintaining good data utility, and a theoretical analysis of the framework's convergence is elaborated.

Synchronized Information Acquisition Method for Virtual Geographic Scene Image Synthesis in Cities Based on Wireless Network Technology

Comparative experimental data show that the studied synchronized information-acquisition method has an acquisition delay of less than 0.5 s and a significantly improved synchronization rate, and that images synthesized from the acquired information are of higher quality and greater practical use.

Missing Data Imputation on IoT Sensor Networks: Implications for on-Site Sensor Calibration

A VAE-based technique is shown to outperform the other methods in imputing missing values at different proportions of missingness on two real-world datasets, and experimental results show improved calibration performance with the imputed dataset.

Patch-Based Image Inpainting with Generative Adversarial Networks

The proposed PGGAN method includes a discriminator network that combines a global GAN (G-GAN) architecture with a patchGAN approach that feeds the generator network in order to capture both local continuity of image texture and pervasive global features in images.

Texture Synthesis with Spatial Generative Adversarial Networks

This is the first successful completely data-driven texture synthesis method based on GANs, and it has the following features that make it a state-of-the-art algorithm for texture synthesis: high image quality of the generated textures, very high scalability with respect to the output texture size, and fast real-time forward generation.

Dilated Spatial Generative Adversarial Networks for Ergodic Image Generation

Architectures based on fully convolutional networks, specifically designed to generate globally ergodic images, that is, images without global dependencies, are proposed.

High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs

A new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs) is presented, which significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing.

Large Scale GAN Training for High Fidelity Natural Image Synthesis

It is found that applying orthogonal regularization to the generator renders it amenable to a simple "truncation trick," allowing fine control over the trade-off between sample fidelity and variety by reducing the variance of the generator's input.
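A minimal sketch of the truncation trick described above, assuming the common formulation where latent components outside a threshold are resampled from a standard normal (this is an illustrative NumPy toy, not the paper's implementation; `truncated_normal` and its parameters are hypothetical names):

```python
import numpy as np

def truncated_normal(size, threshold=0.5, seed=None):
    """Sample latents from a standard normal, resampling any component
    whose magnitude exceeds `threshold`. Smaller thresholds reduce input
    variance, trading sample variety for fidelity."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(size)
    out_of_range = np.abs(z) > threshold
    while out_of_range.any():
        # Redraw only the offending components until all are in range.
        z[out_of_range] = rng.standard_normal(out_of_range.sum())
        out_of_range = np.abs(z) > threshold
    return z

z = truncated_normal((4, 128), threshold=0.5, seed=0)
```

At sampling time, `z` would be fed to the trained generator in place of an untruncated latent draw.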

Improved Techniques for Training GANs

This work focuses on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic, and presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.

Diversity-Sensitive Conditional Generative Adversarial Networks

It is shown that simply adding the proposed regularization to existing models leads to surprisingly diverse generations, substantially outperforming previous approaches that were specifically designed for multi-modal conditional generation in each individual task.
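The regularizer summarized above is commonly written as a penalty that encourages the generator to map distinct latent codes to distinct outputs; a toy NumPy sketch under that assumption (the function name and the L1 distance choice are illustrative, not the authors' code):

```python
import numpy as np

def diversity_loss(g_out1, g_out2, z1, z2, eps=1e-8):
    """Diversity-sensitive regularizer: minimizing this term maximizes the
    distance between two generated outputs relative to the distance between
    the latent codes that produced them."""
    out_dist = np.abs(g_out1 - g_out2).mean()
    z_dist = np.abs(z1 - z2).mean() + eps  # eps avoids division by zero
    return -out_dist / z_dist

z1, z2 = np.zeros(4), np.ones(4)
loss_collapsed = diversity_loss(np.zeros(4), np.zeros(4), z1, z2)  # identical outputs
loss_diverse = diversity_loss(np.zeros(4), np.ones(4), z1, z2)     # distinct outputs
```

The collapsed case yields a higher (worse) loss than the diverse case, so gradient descent on this term pushes the conditional generator away from mode collapse.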

Semantic Image Inpainting with Deep Generative Models

A novel method for semantic image inpainting, which generates the missing content by conditioning on the available data, and successfully predicts information in large missing regions and achieves pixel-level photorealism, significantly outperforming the state-of-the-art methods.
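The conditioning on available data described above is typically realized by searching the generator's latent space for a code whose output matches the observed pixels; a toy sketch of such a masked context loss, assuming a binary mask of known pixels (illustrative NumPy only, not the paper's weighted formulation):

```python
import numpy as np

def context_loss(generated, image, mask):
    """L1 distance between a generated image and the corrupted input,
    computed only over the observed pixels (mask == 1). Minimizing this
    over the latent code conditions generation on the available data."""
    return float(np.abs(mask * (generated - image)).sum())

image = np.ones((2, 2))                    # corrupted input (toy)
mask = np.array([[1, 0], [0, 1]])          # 1 = observed, 0 = missing
generated = np.zeros((2, 2))               # candidate generator output
loss = context_loss(generated, image, mask)
```

In the full method this term is combined with a prior (realism) loss from the discriminator, and the missing region is finally filled with the generator's output at the recovered latent code.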

Image-to-Image Translation with Conditional Adversarial Networks

Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.

Generative Image Inpainting with Contextual Attention

This work proposes a new deep generative model-based approach which can not only synthesize novel image structures but also explicitly utilize surrounding image features as references during network training to make better predictions.