Paint Transformer: Feed Forward Neural Painting with Stroke Prediction

@inproceedings{Liu2021PaintTF,
  title={Paint Transformer: Feed Forward Neural Painting with Stroke Prediction},
  author={Songhua Liu and Tianwei Lin and Dongliang He and Fu Li and Rui Deng and Xin Li and Errui Ding and Hao Wang},
  booktitle={2021 IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021},
  pages={6578--6587}
}
Neural painting refers to the procedure of producing a series of strokes for a given image and non-photo-realistically recreating it using neural networks. While reinforcement learning (RL) based agents can generate a stroke sequence step by step for this task, it is not easy to train a stable RL agent. On the other hand, stroke optimization methods search for a set of stroke parameters iteratively in a large search space; such low efficiency significantly limits their prevalence and… 
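The stroke-optimization baseline that the abstract contrasts with RL can be illustrated with a toy sketch (a hypothetical, minimal version, not any paper's actual method): greedily add rectangular "strokes" chosen by random search so that each accepted stroke reduces the L2 error to a target image.

```python
# Toy stroke-optimization sketch (hypothetical): greedy random search over
# rectangular strokes, accepting a stroke only if it lowers the pixel loss.
import numpy as np

rng = np.random.default_rng(0)
H = W = 32
target = np.zeros((H, W, 3))
target[8:24, 8:24] = [1.0, 0.5, 0.2]      # toy target: an orange square
canvas = np.zeros_like(target)

def paint(canvas, x, y, w, h, color):
    """Return a copy of the canvas with a solid rectangle painted on it."""
    out = canvas.copy()
    out[y:y + h, x:x + w] = color
    return out

def loss(img):
    return np.mean((img - target) ** 2)

for step in range(20):                     # stroke budget
    best = None
    for _ in range(200):                   # random-search proposals per stroke
        x = int(rng.integers(0, W - 1))
        y = int(rng.integers(0, H - 1))
        w = int(rng.integers(1, W - x + 1))
        h = int(rng.integers(1, H - y + 1))
        color = rng.random(3)
        cand = paint(canvas, x, y, w, h, color)
        l = loss(cand)
        if best is None or l < best[0]:
            best = (l, cand)
    if best[0] < loss(canvas):             # keep the stroke only if it helps
        canvas = best[1]
```

With the greedy accept rule the error is non-increasing in the stroke budget, which illustrates the abstract's point: quality comes from searching a large stroke-parameter space, at the cost of many render-and-compare iterations per stroke.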

Citations

Stroke-GAN Painter: Learning to Paint Artworks Using Stroke-Style Generative Adversarial Networks

A Stroke-Style Generative Adversarial Network (Stroke-GAN) learns stroke styles from different stroke-style datasets and produces diverse strokes; the resulting painter generates paintings in different styles while preserving content details and retaining high similarity to the given images.

Intelli-Paint: Towards Developing Human-like Painting Agents

A novel painting approach which learns to generate output canvases while exhibiting a more human-like painting style, and a brushstroke regularization strategy which allows for a "60-80% reduction in the total number of required brushstrokes without any perceivable differences in the quality of the generated canvases".

Intelli-Paint: Towards Developing More Human-Intelligible Painting Agents

This work motivates the need to learn more human-intelligible painting sequences in order to facilitate the use of autonomous painting systems in a more interactive context and proposes a novel painting approach which learns to generate output canvases while exhibiting a painting style which is more relatable to human users.

Neural Brushstroke Engine

The Neural Brushstroke Engine is proposed, the first method to apply deep generative models to learn a distribution of interactive drawing tools; it is shown that the latent space learned by the model generalizes to unseen and more experimental drawing styles by embedding real styles into the latent space.

Robot Learning to Paint from Demonstrations

The core idea lies in allowing the robot to learn continuous stroke-level skills that jointly encode action trajectories and painted outcomes from an extensive collection of human demonstrations.

Abstract Painting Synthesis via Decremental optimization

A painting synthesis method that uses a CLIP (Contrastive Language-Image Pre-training) model to build a semantically aware metric, so that cross-domain semantic similarity is explicitly involved to ensure convergence of the objective function.

Paint2Pix: Interactive Painting based Progressive Image Synthesis and Editing

This paper proposes a novel approach paint2pix, which learns to predict (and adapt) “what a user wants to draw” from rudimentary brushstroke inputs, by learning a mapping from the manifold of incomplete human paintings to their realistic renderings.

ShadowPainter: Active Learning Enabled Robotic Painting through Visual Measurement and Reproduction of the Artistic Creation Process

In this paper, we present an active learning enabled robotic painting system, called ShadowPainter, which acquires artist-specific painting information from the artwork creation process and achieves…

High-Fidelity Guided Image Synthesis with Latent Diffusion Models

A novel guided image synthesis framework is proposed, which addresses this problem by modelling the output image as the solution of a constrained optimization problem and shows that while computing an exact solution to the optimization is infeasible, an approximation can be achieved while just requiring a single pass of the reverse diffusion process.

Painting Algorithms

This document presents new approaches for painting algorithms based on optimization methods and learning models. My thesis focuses on artistic stylization of images under a stroke-based painting…

References


StrokeNet: A Neural Painting Environment

StrokeNet is presented, a novel model where the agent is trained upon a well-crafted neural approximation of the painting environment; it was able to learn to write characters such as MNIST digits faster than reinforcement learning approaches, in an unsupervised manner.

Learning to Sketch with Deep Q Networks and Demonstrated Strokes

A two-stage learning framework to teach a machine to doodle in a simulated painting environment via Stroke Demonstration and deep Q-learning (SDQ), which generates a sequence of pen actions to reproduce a reference drawing and mimics the behavior of human painters.

Neural Painters: A learned differentiable constraint for generating brushstroke paintings

It is shown that when training an agent to "paint" images using brushstrokes, using a differentiable neural painter leads to much faster convergence, and a method is proposed for encouraging this agent to follow human-like strokes when reconstructing digits.

Learning to Paint With Model-Based Deep Reinforcement Learning

We show how to teach machines to paint like human painters, who can use a small number of strokes to create fantastic paintings. By employing a neural renderer in model-based deep reinforcement learning…

Artist Agent: A Reinforcement Learning Approach to Automatic Stroke Generation in Oriental Ink Painting

This work proposes to model a brush as a reinforcement learning agent, and learn desired brush-trajectories by maximizing the sum of rewards in the policy search framework to automatically generate smooth and natural brush strokes in oriental ink painting.

Synthesizing Programs for Images using Reinforced Adversarial Learning

SPIRAL is an adversarially trained agent that generates a program which is executed by a graphics engine to interpret and sample images, and a surprising finding is that using the discriminator's output as a reward signal is the key to allow the agent to make meaningful progress at matching the desired output rendering.

Perceptual Losses for Real-Time Style Transfer and Super-Resolution

This work considers image transformation problems and proposes the use of perceptual loss functions for training feed-forward networks for image transformation tasks; it shows results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al. in real time.
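The idea of a perceptual loss can be sketched in a toy form: compare images in a feature space rather than pixel space. In this minimal, hypothetical illustration, a single fixed random convolution `phi` stands in for a pretrained feature extractor such as VGG (which is what the actual paper uses):

```python
# Toy perceptual-loss sketch (hypothetical): `phi` is one fixed random
# convolution + ReLU standing in for pretrained network features; the loss
# is the MSE between feature maps instead of raw pixels.
import numpy as np

rng = np.random.default_rng(0)
K = rng.standard_normal((3, 3, 3, 8))    # 3x3 conv kernel, 3 -> 8 channels

def phi(img):
    """Extract 'features' with one valid-mode convolution and ReLU."""
    H, W, _ = img.shape
    out = np.zeros((H - 2, W - 2, 8))
    for i in range(H - 2):
        for j in range(W - 2):
            patch = img[i:i + 3, j:j + 3, :]             # (3, 3, 3) window
            out[i, j] = np.maximum(np.einsum('xyc,xyco->o', patch, K), 0)
    return out

def perceptual_loss(pred, target):
    return np.mean((phi(pred) - phi(target)) ** 2)

a = rng.random((8, 8, 3))
b = rng.random((8, 8, 3))
```

Here `perceptual_loss(a, a)` is exactly zero while `perceptual_loss(a, b)` is positive for distinct images; with real pretrained features the metric additionally rewards matching textures and structures rather than exact pixel values.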

APDrawingGAN: Generating Artistic Portrait Drawings From Face Photos With Hierarchical GANs

This work proposes APDrawingGAN, a novel GAN-based architecture that builds upon hierarchical generators and discriminators, combining a global network (for images as a whole) and local networks (for individual facial regions), which allows dedicated drawing strategies to be learned for different facial features.

CartoonGAN: Generative Adversarial Networks for Photo Cartoonization

Experimental results show that the proposed CartoonGAN method is able to generate high-quality cartoon images from real-world photos and outperforms state-of-the-art methods and is much more efficient to train than existing methods.

Learning to Cartoonize Using White-Box Cartoon Representations

  • Xinrui Wang, Jinze Yu
  • Computer Science
  • 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020
This paper proposes to separately identify three white-box representations from images: the surface representation that contains the smooth surface of cartoon images, the structure representation that refers to the sparse color blocks and flattened global content in the celluloid-style workflow, and the texture representation that reflects high-frequency texture, contours, and details in cartoon images.