Ancient Painting to Natural Image: A New Solution for Painting Processing

Tingting Qiao, Weijing Zhang, Miao Zhang, Zixuan Ma, Duanqing Xu. 2019 IEEE Winter Conference on Applications of Computer Vision (WACV).
Collecting a large-scale, well-annotated dataset for image processing has become common practice in computer vision. In the ancient painting domain, however, this is impractical: the number of surviving paintings is limited and their styles are highly diverse. We therefore propose a novel solution to the problems that come with ancient painting processing: using domain transfer to convert ancient paintings into photo-realistic natural images. By doing so, the "ancient painting… 


New Challenges of Face Detection in Paintings based on Deep Learning

A performance analysis of three CNN architectures (VGG16, ResNet50, and ResNet101) as backbone networks of one of the most popular CNN-based object detectors, Faster R-CNN, to boost face detection performance.

MUSE: Textual Attributes Guided Portrait Painting Generation

A novel approach to automatically generate portrait paintings guided by textual attributes: it takes a set of attributes written in text, along with facial features extracted from a photo of the subject, as input, and extends an image-to-image generative model into a novel stacked neural network architecture that accepts textual attributes.

MirrorGAN: Learning Text-To-Image Generation by Redescription

Thorough experiments on two public benchmark datasets demonstrate the superiority of MirrorGAN over other representative state-of-the-art methods.

Few-shot Image Generation with Elastic Weight Consolidation

This work adapts a pretrained model, without introducing any additional parameters, to the few examples of the target domain, in order to best preserve the information of the source dataset, while fitting the target.

Unsupervised Cross-Modal Retrieval by Coupled Dual Generative Adversarial Networks

This paper addresses the unsupervised cross-modal retrieval problem with a novel framework, coupled dual generative adversarial networks (CDGAN), which can match images and sentences with complex content and achieves state-of-the-art cross-modal retrieval results on two popular benchmark datasets.

MUSE: Illustrating Textual Attributes by Portrait Generation

A novel approach, MUSE, to illustrate textual attributes visually via portrait generation by extending an image-to-image generative model to accept textual attributes and proposes a new attribute reconstruction metric to evaluate whether the generated portraits preserve the subject's attributes.

New Method for Museum Archiving: “Quantitative Analysis Meets Art History”

  • Minseok Kim
  • Journal on Computing and Cultural Heritage, 2022
As museums are encouraged to explore new ways to generate digital content, quantitative methods are being used to suggest new angles and important analysis tools for art-historical research.

Promising Generative Adversarial Network Based Sinogram Inpainting Method for Ultra-Limited-Angle Computed Tomography Imaging

A sinogram-inpainting GAN (SI-GAN) is proposed to restore missing sinogram data and suppress the singularity of the truncated sinogram for ultra-limited-angle reconstruction; its U-Net generator and patch-design discriminator make the network suitable for standard medical CT images.

AI Radar Sensor: Creating Radar Depth Sounder Images Based on Generative Adversarial Network

The experiments show that synthetic radar images generated by generative adversarial network (GAN) can be used in combination with real images for data augmentation and training of deep neural networks.

Generative adversarial network-based sinogram super-resolution for computed tomography imaging.

A novel sinogram super-resolution generative adversarial network (SSR-GAN) model is proposed to obtain high-resolution (HR) sinograms from low-resolution (LR) sinograms, thereby improving reconstructed image quality under the 2×2 acquisition mode.

Painting Image Classification Using Online Learning Algorithm

A simple yet powerful online learning algorithm to classify the category of painting images, using a combination of local and global features as the image descriptor; K-means is then applied to initialize the dictionary.

Multi-View Feature Combination for Ancient Paintings Chronological Classification

A novel computational method that uses multi-view local color features extracted from paintings to determine the era in which a painting was created, demonstrating the advantage of the proposed features, especially with small training samples.

Fast Patch-based Style Transfer of Arbitrary Style

A simpler optimization objective based on local matching, combining content structure and style textures in a single layer of a pretrained network, with desirable properties such as a simpler optimization landscape, intuitive parameter tuning, and consistent frame-by-frame performance on video.

Genre and Style Based Painting Classification

The problem of extracting features from paintings and classifying paintings into their genres and styles is explored, including a comparison to existing feature extraction and classification methods and an analysis of the authors' own approach across different feature vectors.

Image Style Transfer Using Convolutional Neural Networks

A Neural Algorithm of Artistic Style is introduced that can separate and recombine the image content and style of natural images and provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high level image synthesis and manipulation.

Perceptual Losses for Real-Time Style Transfer and Super-Resolution

This work considers image transformation problems and proposes perceptual loss functions for training feed-forward networks on such tasks, showing results on image style transfer, where a feed-forward network is trained to solve, in real time, the optimization problem proposed by Gatys et al.
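The perceptual-loss idea summarized above (comparing two images in the feature space of a fixed network rather than in raw pixel space) can be sketched minimally as follows; the random filter bank here stands in for a pretrained CNN layer and is purely illustrative, not the paper's VGG-based loss:

```python
import numpy as np

def conv2d(img, kernels):
    """Toy 'feature extractor': convolve a grayscale image with a
    fixed bank of kernels (a stand-in for a frozen CNN layer)."""
    kh, kw = kernels.shape[1:]
    H, W = img.shape
    out = np.zeros((len(kernels), H - kh + 1, W - kw + 1))
    for k, ker in enumerate(kernels):
        for i in range(out.shape[1]):
            for j in range(out.shape[2]):
                out[k, i, j] = np.sum(img[i:i + kh, j:j + kw] * ker)
    return out

def perceptual_loss(x, y, kernels):
    """Mean squared error computed in feature space, not pixel space."""
    fx, fy = conv2d(x, kernels), conv2d(y, kernels)
    return float(np.mean((fx - fy) ** 2))

rng = np.random.default_rng(0)
kernels = rng.normal(size=(4, 3, 3))       # fixed, untrained filters
x = rng.normal(size=(16, 16))
y = x + 0.1 * rng.normal(size=(16, 16))    # slightly perturbed copy
print(perceptual_loss(x, y, kernels))
```

In the paper the extractor is a pretrained VGG network and the loss trains a feed-forward transformation network; the point here is only the shape of the computation.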

Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks

This generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain, and outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins.

Rethinking Atrous Convolution for Semantic Image Segmentation

The proposed 'DeepLabv3' system significantly improves over previous DeepLab versions without DenseCRF post-processing and attains performance comparable to other state-of-the-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.
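As a rough illustration of atrous (dilated) convolution, the mechanism DeepLabv3 uses to enlarge receptive fields without adding parameters, here is a 1-D sketch with a hand-rolled loop (illustrative only, not the DeepLab implementation):

```python
import numpy as np

def atrous_conv1d(x, kernel, rate):
    """1-D atrous (dilated) convolution: kernel taps are spaced
    `rate` samples apart, so the same number of weights covers a
    wider span of the input."""
    span = (len(kernel) - 1) * rate + 1
    out = np.zeros(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(k * x[i + j * rate] for j, k in enumerate(kernel))
    return out

x = np.arange(10, dtype=float)
k = [1.0, 1.0, 1.0]
print(atrous_conv1d(x, k, rate=1))  # ordinary 3-tap convolution
print(atrous_conv1d(x, k, rate=2))  # same 3 taps, receptive field of 5
```

With rate=1 this reduces to an ordinary convolution; larger rates trade output length for receptive field, which is the trade-off DeepLabv3 exploits in place of repeated downsampling.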

DualGAN: Unsupervised Dual Learning for Image-to-Image Translation

A novel dual-GAN mechanism is developed, which enables image translators to be trained from two sets of unlabeled images from two domains, and can even achieve comparable or slightly better results than conditional GAN trained on fully labeled data.
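The dual-learning signal described above can be illustrated with a toy cycle-consistency check: two hand-picked affine maps stand in for the learned translators (the names G_AB and G_BA are illustrative, not the paper's code), and the round-trip reconstruction error is the supervision that replaces paired labels:

```python
import numpy as np

# Toy "generators": exact affine inverses standing in for the two
# learned image translators (illustrative only).
def G_AB(x):  # domain A -> domain B
    return 2.0 * x + 1.0

def G_BA(y):  # domain B -> domain A
    return 0.5 * (y - 1.0)

def cycle_loss(x_a, y_b):
    """L1 reconstruction error after a round trip through both
    translators: A -> B -> A and B -> A -> B."""
    rec_a = G_BA(G_AB(x_a))
    rec_b = G_AB(G_BA(y_b))
    return float(np.mean(np.abs(x_a - rec_a)) + np.mean(np.abs(y_b - rec_b)))

x_a = np.linspace(0.0, 1.0, 5)   # unlabeled samples from domain A
y_b = np.linspace(1.0, 3.0, 5)   # unlabeled samples from domain B
print(cycle_loss(x_a, y_b))      # 0.0 — these toy maps invert exactly
```

In DualGAN the two translators are networks trained jointly with adversarial losses, and this reconstruction term is what lets them learn from two unpaired image sets.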

Image-to-Image Translation with Conditional Adversarial Networks

Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.