Corpus ID: 189928525

LPaintB: Learning to Paint from Self-Supervision

@article{Jia2019LPaintBLT,
  title={LPaintB: Learning to Paint from Self-Supervision},
  author={Biao Jia and Jonathan Brandt and Radom{\'i}r Mech and Byungmoon Kim and Dinesh Manocha},
  journal={ArXiv},
  year={2019},
  volume={abs/1906.06841}
}
We present a novel reinforcement learning-based natural media painting algorithm. Our goal is to reproduce a reference image using brush strokes, and we encode this objective through the agent's observations. Our formulation accounts for the fact that the reward distribution in the action space is sparse, which makes training a reinforcement learning algorithm from scratch difficult. We present an approach that combines self-supervised learning and reinforcement learning to effectively transfer negative…
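To make the abstract's setup concrete, the sketch below is a toy illustration (not the paper's algorithm): a canvas is modified by brush-stroke actions, and the reward is the reduction in L2 distance to a reference image, so most random strokes yield little or no reward — the sparsity the abstract refers to. All names (`paint_stroke`, `reward`) and the greedy random policy are assumptions made for illustration only.

```python
import numpy as np

def paint_stroke(canvas, x, y, radius, value):
    """Apply a circular brush stroke of constant intensity to a copy of the canvas."""
    h, w = canvas.shape
    ys, xs = np.ogrid[:h, :w]
    mask = (xs - x) ** 2 + (ys - y) ** 2 <= radius ** 2
    out = canvas.copy()
    out[mask] = value
    return out

def reward(new_canvas, prev_canvas, reference):
    """Reward = how much the stroke reduced the L2 distance to the reference."""
    before = np.linalg.norm(prev_canvas - reference)
    after = np.linalg.norm(new_canvas - reference)
    return before - after

rng = np.random.default_rng(0)
reference = np.zeros((32, 32))
reference[8:24, 8:24] = 1.0          # target image: a white square

canvas = np.zeros((32, 32))
total_reward = 0.0
for _ in range(50):                  # rollout of a random stroke policy
    x, y = rng.integers(0, 32, size=2)
    r = rng.integers(1, 5)
    candidate = paint_stroke(canvas, x, y, r, 1.0)
    step_r = reward(candidate, canvas, reference)
    if step_r > 0:                   # greedily keep only improving strokes
        canvas = candidate
        total_reward += step_r
```

Even this greedy baseline shows why sparse rewards are a problem: most sampled strokes are rejected, which motivates the paper's use of self-supervision to bootstrap the policy instead of learning from scratch.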

References

Showing 1-10 of 40 references
PaintBot: A Reinforcement Learning Approach for Natural Media Painting
A new automated digital painting framework, based on a painting agent trained through reinforcement learning, which can learn an effective policy with a high-dimensional continuous action space comprising pen pressure, width, tilt, and color, for a variety of painting styles.
Artist Agent: A Reinforcement Learning Approach to Automatic Stroke Generation in Oriental Ink Painting
This work proposes to model a brush as a reinforcement learning agent and learn desired brush trajectories by maximizing the sum of rewards in the policy search framework, to automatically generate smooth and natural brush strokes in oriental ink painting.
Playing Atari with Deep Reinforcement Learning
This work presents the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning, which outperforms all previous approaches on six of the games and surpasses a human expert on three of them.
StrokeNet: A Neural Painting Environment
StrokeNet is presented, a novel model where the agent is trained on a well-crafted neural approximation of the painting environment, and was able to learn to write characters such as MNIST digits faster than reinforcement learning approaches in an unsupervised manner.
Grasp2Vec: Learning Object Representations from Self-Supervised Grasping
This paper studies how to acquire effective object-centric representations for robotic manipulation tasks without human labeling, using self-supervised methods driven by autonomous robot interaction with the environment.
Stroke-Based Stylization Learning and Rendering with Inverse Reinforcement Learning
An AI-aided art authoring (A4) system of non-photorealistic rendering is developed that allows users to automatically generate brush-stroke paintings in a specific artist's style by inverse reinforcement learning.
Generating Videos with Scene Dynamics
A generative adversarial network for video with a spatio-temporal convolutional architecture that untangles the scene's foreground from the background is proposed; it can generate tiny videos up to a second long at full frame rate, outperforming simple baselines.
Revisiting Self-Supervised Visual Representation Learning
This study revisits numerous previously proposed self-supervised models, conducts a thorough large-scale study, and uncovers multiple crucial insights about standard recipes for CNN design that do not always translate to self-supervised representation learning.
Scribbler: Controlling Deep Image Synthesis with Sketch and Color
A deep adversarial image synthesis architecture conditioned on sketched boundaries and sparse color strokes is proposed to generate realistic cars, bedrooms, or faces, and a sketch-based image synthesis system is demonstrated that allows users to scribble over the sketch to indicate preferred colors for objects.
Unsupervised Visual Representation Learning by Context Prediction
It is demonstrated that the feature representation learned using this within-image context indeed captures visual similarity across images and allows unsupervised visual discovery of objects like cats, people, and even birds from the Pascal VOC 2011 detection dataset.