Name Your Style: An Arbitrary Artist-aware Image Style Transfer
@article{Liu2022NameYS,
  title={Name Your Style: An Arbitrary Artist-aware Image Style Transfer},
  author={Zhi-Song Liu and Li-Wen Wang and W. C. Siu and Vicky S. Kalogeiton},
  journal={ArXiv},
  year={2022},
  volume={abs/2202.13562}
}
Image style transfer has attracted widespread attention in the past few years. Despite its remarkable results, it requires style images to be available as references, making it less flexible and less convenient. Text is the most natural way to describe a style. More importantly, text can describe implicit, abstract styles, such as the styles of specific artists or art movements. In this paper, we propose a text-driven image style transfer (TxST) that leverages advanced image-text encoders to…
7 Citations
SINE: SINgle Image Editing with Text-to-Image Diffusion Models
- Computer Science, ArXiv
- 2022
A novel model-based guidance built upon the classifier-free guidance is proposed so that the knowledge from the model trained on a single image can be distilled into the pre-trained diffusion model, enabling content creation even with one given image.
Inversion-Based Creativity Transfer with Diffusion Models
- Art, ArXiv
- 2022
In this paper, we introduce the task of “Creativity Transfer”. The artistic creativity within a painting is the means of expression, which includes not only the painting material, colors, and…
CLIPTexture: Text-Driven Texture Synthesis
- Computer Science, ACM Multimedia
- 2022
A novel texture synthesis framework based on the CLIP is proposed, which models the texture synthesis problem as an optimization process and realizes text-driven texture synthesis by minimizing the distance between the input image and the text prompt in latent space.
FastCLIPStyler: Towards fast text-based image style transfer using style representation
- Computer Science, ArXiv
- 2022
This work demonstrates how combining CLIPStyler with a pre-trained, purely vision-based style transfer model can significantly reduce the inference time of CLIPStyler and argues that this model also has merits in terms of the visual aesthetics of the generated images.
FastCLIPstyler: Optimisation-free Text-based Image Style Transfer Using Style Representations
- Computer Science
- 2022
A generalised text-based style transfer network capable of stylising images in a single forward pass for an arbitrary text input is created, making the image stylisation process around 1000 times more efficient than CLIPstyler.
An Image is Worth One Word: Personalizing Text-to-Image Generation using Textual Inversion
- Computer Science, ArXiv
- 2022
This work uses only 3 - 5 images of a user-provided concept to represent it through new “words” in the embedding space of a frozen text-to-image model, which can be composed into natural language sentences, guiding personalized creation in an intuitive way.
Mimetic Models: Ethical Implications of AI that Acts Like You
- Computer Science, AIES
- 2022
This framework includes a number of distinct scenarios for the use of mimetic models, and considers the impacts on a range of different participants, including the target being modeled, the operator who deploys the model, and the entities that interact with it.
References
SHOWING 1-10 OF 55 REFERENCES
CLIPstyler: Image Style Transfer with a Single Text Condition
- Computer Science, 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2022
A patch-wise text-image matching loss with multiview augmentations for realistic texture transfer that enables a style transfer ‘without’ a style image, but only with a text description of the desired style.
Language-Driven Image Style Transfer
- Computer Science, ArXiv
- 2021
This work proposes a contrastive language visual artist (CLVA) that learns to extract visual semantics from style instructions and accomplishes LDAST via a patch-wise style discriminator, and compares contrastive pairs of content images and style instructions to improve their mutual relativeness.
Domain-Aware Universal Style Transfer
- Computer Science, 2021 IEEE/CVF International Conference on Computer Vision (ICCV)
- 2021
A unified architecture, Domain-aware Style Transfer Networks (DSTN) that transfer not only the style but also the property of domain from a given reference image, and designs a novel domainness indicator that captures the domainness value from the texture and structural features of reference images.
Arbitrary Style Transfer With Style-Attentional Networks
- Computer Science, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2019
A novel style-attentional network (SANet) that efficiently and flexibly integrates the local style patterns according to the semantic spatial distribution of the content image and preserves the content structure as much as possible while enriching the style patterns.
DualAST: Dual Style-Learning Networks for Artistic Style Transfer
- Art, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2021
A novel Dual Style-Learning Artistic Style Transfer (DualAST) framework that simultaneously learns both the holistic artist-style and the specific artwork-style from a single style image; experiments confirm the superiority of this method.
Fast Patch-based Style Transfer of Arbitrary Style
- Computer Science, ArXiv
- 2016
A simpler optimization objective based on local matching that combines the content structure and style textures in a single layer of the pretrained network is proposed that has desirable properties such as a simpler optimization landscape, intuitive parameter tuning, and consistent frame-by-frame performance on video.
A Content Transformation Block for Image Style Transfer
- Computer Science, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
- 2019
This paper introduces a content transformation module between the encoder and decoder and utilizes similar content appearing in photographs and style samples to learn how style alters content details and generalizes this to other class details.
Avatar-Net: Multi-scale Zero-Shot Style Transfer by Feature Decoration
- Computer Science, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
- 2018
This paper proposes an efficient yet effective Avatar-Net that enables visually plausible multi-scale transfer for arbitrary style in real-time and demonstrates the state-of-the-art effectiveness and efficiency of the proposed method in generating high-quality stylized images.
StyleCLIPDraw: Coupling Content and Style in Text-to-Drawing Synthesis
- Computer Science, ArtArXiv
- 2021
StyleCLIPDraw is introduced, which adds a style loss to the CLIPDraw text-to-drawing synthesis model to allow artistic control of the synthesized drawings in addition to control of the content via text.
A Style-Aware Content Loss for Real-time HD Style Transfer
- Computer Science, ECCV
- 2018
A style-aware content loss is proposed, which is trained jointly with a deep encoder-decoder network for real-time, high-resolution stylization of images and videos and results show that this approach better captures the subtle nature in which a style affects content.