A Content Transformation Block for Image Style Transfer

@inproceedings{Kotovenko2019ACT,
  title={A Content Transformation Block for Image Style Transfer},
  author={Dmytro Kotovenko and Artsiom Sanakoyeu and Pingchuan Ma and Sabine Lang and Bj{\"o}rn Ommer},
  booktitle={2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2019},
  pages={10024-10033}
}
Style transfer has recently received a lot of attention, since it allows us to study fundamental challenges in image understanding and synthesis. […] Key Method: Moreover, we utilize similar content appearing in photographs and style samples to learn how style alters content details, and we generalize this to other class details. Additionally, this work presents a novel normalization layer critical for high-resolution image synthesis. The robustness and speed of our model enable video stylization in real time…
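Most of the methods listed on this page build on the Gram-matrix style representation of Gatys et al. ("Image Style Transfer Using Convolutional Neural Networks" in the references below): style is captured as channel-wise correlations of deep feature activations, and a style loss compares these correlations between images. A minimal NumPy sketch of that loss for a single layer (the CNN feature extractor itself is assumed and omitted; the activations here are random stand-ins):

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a feature map with shape (channels, height, width)."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)   # flatten the spatial dimensions
    return f @ f.T / (c * h * w)     # (c, c) channel correlations, normalized

def style_loss(feats_a, feats_b):
    """Squared Frobenius distance between the Gram matrices of one layer."""
    g_a, g_b = gram_matrix(feats_a), gram_matrix(feats_b)
    return float(np.sum((g_a - g_b) ** 2))

# Toy example with random "VGG-like" activations.
rng = np.random.default_rng(0)
a = rng.standard_normal((8, 4, 4))
b = rng.standard_normal((8, 4, 4))
assert style_loss(a, a) == 0.0   # identical features -> zero style loss
assert style_loss(a, b) > 0.0    # different features -> positive loss
```

In a full pipeline this loss is summed over several layers of a pretrained network and combined with a content loss; the feed-forward methods referenced below (e.g. "Perceptual Losses for Real-Time Style Transfer and Super-Resolution") train a generator network against exactly this kind of objective instead of optimizing pixels directly.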


Geometric Style Transfer

TLDR
This work introduces a neural architecture that supports transfer of geometric style, provides user studies that show the quality of the output, and quantifies the importance of geometric style transfer to style recognition by humans.

Name Your Style: An Arbitrary Artist-aware Image Style Transfer

TLDR
This paper introduces a contrastive training strategy to effectively extract style descriptions from the image-text model (i.e., CLIP), which aligns stylization with the text description, and proposes a novel and efficient attention module that explores cross-attentions to fuse style and content features.

Two-stream FCNs to balance content and style for style transfer

TLDR
This paper proposes end-to-end two-stream fully convolutional networks (FCNs) aimed at balancing the contributions of content and style in rendered images, and generates stylized images that are more balanced in content and style than those of state-of-the-art methods.

Domain Enhanced Arbitrary Image Style Transfer via Contrastive Learning

TLDR
This work presents Contrastive Arbitrary Style Transfer (CAST), which is a new style representation learning and style transfer method via contrastive learning that achieves significantly better results compared to those obtained via state-of-the-art methods.

Anisotropic Stroke Control for Multiple Artists Style Transfer

TLDR
This work designs an Anisotropic Stroke Module (ASM) which realizes the dynamic adjustment of style strokes between the non-trivial and the trivial regions, and presents a novel Multi-Scale Projection Discriminator to realize texture-level conditional generation.

Two-Stage Peer-Regularized Feature Recombination for Arbitrary Image Style Transfer

TLDR
This paper introduces a neural style transfer model that generates a stylized image conditioned on a set of examples describing the desired style; it can be trained fully end-to-end thanks to a new set of cyclic losses that operate directly in latent space rather than on the RGB images.

Diverse Image Style Transfer via Invertible Cross-Space Mapping

Image style transfer aims to transfer the styles of artworks onto arbitrary photographs to create novel artistic images. Although style transfer is inherently an underdetermined problem, existing…

Language-Driven Image Style Transfer

TLDR
This work proposes a contrastive language visual artist (CLVA) that learns to extract visual semantics from style instructions and accomplishes LDAST via a patch-wise style discriminator, and compares contrastive pairs of content images and style instructions to improve their mutual relevance.

Interactive Style Transfer: All is Your Palette

Neural style transfer (NST) can create impressive artworks by transferring a reference style to a content image. Current image-to-image NST methods are short of fine-grained controls, which are often…

Arbitrary Style Transfer via Multi-Adaptation Network

TLDR
The proposed multi-adaptation network leads to better results than the state-of-the-art style transfer methods and enables main style patterns and exact content structures to adapt to various input images, respectively.

References

Showing 1-10 of 38 references

A Style-Aware Content Loss for Real-time HD Style Transfer

TLDR
A style-aware content loss is proposed, which is trained jointly with a deep encoder-decoder network for real-time, high-resolution stylization of images and videos and results show that this approach better captures the subtle nature in which a style affects content.

Fast Patch-based Style Transfer of Arbitrary Style

TLDR
A simpler optimization objective based on local matching is proposed that combines the content structure and style textures in a single layer of the pretrained network; it has desirable properties such as a simpler optimization landscape, intuitive parameter tuning, and consistent frame-by-frame performance on video.

Universal Style Transfer via Feature Transforms

TLDR
The key ingredient of the method is a pair of feature transforms, whitening and coloring, that are embedded to an image reconstruction network that reflects a direct matching of feature covariance of the content image to a given style image.

Artistic Style Transfer for Videos and Spherical Images

TLDR
A deep network architecture and training procedures are proposed that allow us to stylize arbitrary-length videos in a consistent and stable way, and nearly in real time, and it is shown that the proposed methods clearly outperform simpler baselines both qualitatively and quantitatively.

Adjustable Real-time Style Transfer

TLDR
A novel method is proposed that allows adjustment of crucial hyper-parameters after training and in real time, through a set of manually adjustable parameters that enable the user to modify the synthesized output from the same pair of style/content images in search of a favorite stylized image.

Multimodal Transfer: A Hierarchical Deep Convolutional Neural Network for Fast Artistic Style Transfer

TLDR
A multimodal convolutional neural network is proposed that takes into consideration faithful representations of both color and luminance channels, and performs stylization hierarchically with multiple losses of increasing scales; it can perform style transfer in nearly real time by shifting much more sophisticated training offline.

Image Style Transfer Using Convolutional Neural Networks

TLDR
A Neural Algorithm of Artistic Style is introduced that can separate and recombine the image content and style of natural images and provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high level image synthesis and manipulation.

Stable and Controllable Neural Texture Synthesis and Style Transfer Using Histogram Losses

TLDR
This paper first gives a mathematical explanation of the source of instabilities in many previous approaches, and then addresses these instabilities by using histogram losses to synthesize textures that better statistically match the exemplar.

Artistic Style Transfer for Videos

TLDR
This work presents an approach that transfers the style from one image (for example, a painting) to a whole video sequence, and makes use of recent advances in style transfer in still images and proposes new initializations and loss functions applicable to videos.

Perceptual Losses for Real-Time Style Transfer and Super-Resolution

TLDR
This work considers image transformation problems and proposes the use of perceptual loss functions for training feed-forward networks for image transformation tasks; it shows results on image style transfer, where a feed-forward network is trained to solve, in real time, the optimization problem proposed by Gatys et al.