Ancient Painting to Natural Image: A New Solution for Painting Processing

@article{Qiao2019AncientPT,
  title={Ancient Painting to Natural Image: A New Solution for Painting Processing},
  author={Tingting Qiao and Weijing Zhang and Miao Zhang and Zixuan Ma and Duanqing Xu},
  journal={2019 IEEE Winter Conference on Applications of Computer Vision (WACV)},
  year={2019},
  pages={521-530}
}
Collecting a large-scale and well-annotated dataset for image processing has become a common practice in computer vision. However, in the ancient painting area, this task is not practical as the number of paintings is limited and their styles are highly diverse. We therefore propose a novel solution for the problems that come with ancient painting processing: using domain transfer to convert ancient paintings to photo-realistic natural images. By doing so, the "ancient painting…
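
The abstract frames the task as unpaired domain transfer from ancient paintings to photo-realistic natural images. As a rough illustration of that family of techniques (not the authors' specific network, which the truncated abstract does not spell out), the sketch below shows a CycleGAN-style generator objective with adversarial and cycle-consistency terms; all module names are placeholders.

```python
# Minimal sketch of an unpaired painting-to-photo translation objective
# (CycleGAN-style). G_p2n, G_n2p, D_n, D_p are placeholder networks;
# this is an illustration, not the paper's exact model.
import torch
import torch.nn as nn

def unpaired_translation_loss(G_p2n, G_n2p, D_n, D_p, painting, photo, lambda_cyc=10.0):
    """Generator-side loss for one unpaired batch of paintings and photos."""
    adv = nn.MSELoss()   # least-squares GAN loss
    l1 = nn.L1Loss()

    fake_photo = G_p2n(painting)      # painting -> "natural image"
    fake_painting = G_n2p(photo)      # photo -> "ancient painting"

    # Adversarial terms: each translation should fool the target-domain critic.
    pred_photo, pred_painting = D_n(fake_photo), D_p(fake_painting)
    loss_adv = adv(pred_photo, torch.ones_like(pred_photo)) + \
               adv(pred_painting, torch.ones_like(pred_painting))

    # Cycle consistency: translating back should recover the original input.
    loss_cyc = l1(G_n2p(fake_photo), painting) + l1(G_p2n(fake_painting), photo)

    return loss_adv + lambda_cyc * loss_cyc
```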

Citations

New Challenges of Face Detection in Paintings based on Deep Learning
A performance analysis of three CNN architectures, namely VGG16, ResNet50, and ResNet101, as backbone networks for Faster R-CNN, one of the most popular CNN-based object detectors, to boost face detection performance.

MUSE: Textual Attributes Guided Portrait Painting Generation
  • Xiaodan Hu, Pengfei Yu, Kevin Knight, Heng Ji, Bo Li, Honghui Shi
  • Computer Science
  • 2021 IEEE 4th International Conference on Multimedia Information Processing and Retrieval (MIPR)
  • 2021
A novel approach that automatically generates portrait paintings guided by textual attributes: it takes a set of attributes written in text, together with facial features extracted from a photo of the subject, as input, and designs a stacked neural network architecture by extending an image-to-image generative model to accept textual attributes.

MirrorGAN: Learning Text-To-Image Generation by Redescription
Thorough experiments on two public benchmark datasets demonstrate the superiority of MirrorGAN over other representative state-of-the-art methods.

Few-shot Image Generation with Elastic Weight Consolidation
This work adapts a pretrained model, without introducing any additional parameters, to the few examples of the target domain in order to best preserve the information of the source dataset while fitting the target.

Unsupervised Cross-Modal Retrieval by Coupled Dual Generative Adversarial Networks
This paper addresses the unsupervised cross-modal retrieval problem with a novel framework called coupled dual generative adversarial networks (CDGAN), which matches images and sentences with complex content well and achieves state-of-the-art cross-modal retrieval results on two popular benchmark datasets.

MUSE: Illustrating Textual Attributes by Portrait Generation
A novel approach, MUSE, to illustrate textual attributes visually via portrait generation; it extends an image-to-image generative model to accept textual attributes and proposes a new attribute reconstruction metric to evaluate whether the generated portraits preserve the subject's attributes.

Promising Generative Adversarial Network Based Sinogram Inpainting Method for Ultra-Limited-Angle Computed Tomography Imaging
The sinogram-inpainting GAN (SI-GAN) is proposed to restore missing sinogram data and suppress the singularity of the truncated sinogram for ultra-limited-angle reconstruction; its U-Net generator and patch-design discriminator make the network suitable for standard medical CT images.

AI Radar Sensor: Creating Radar Depth Sounder Images Based on Generative Adversarial Network
The experiments show that synthetic radar images generated by a generative adversarial network (GAN) can be used in combination with real images for data augmentation and training of deep neural networks.

Generative adversarial network-based sinogram super-resolution for computed tomography imaging
A novel sinogram super-resolution generative adversarial network (SSR-GAN) model is proposed to obtain high-resolution (HR) sinograms from low-resolution (LR) sinograms, thereby improving the reconstruction image quality under the 2×2 acquisition mode.

References

Showing 1-10 of 37 references

Painting Image Classification Using Online Learning Algorithm
A simple yet powerful online learning algorithm to classify painting images by category, using a combination of local and global features as the image descriptor, with K-means applied to initialize the dictionary.
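
The dictionary initialization mentioned above is a standard bag-of-visual-words step. The sketch below shows that generic step with scikit-learn's K-means, assuming local descriptors have already been extracted; it is an illustration, not the paper's code.

```python
# Sketch of a K-means codebook ("dictionary") for bag-of-visual-words encoding.
# Descriptor extraction is assumed to be done elsewhere.
import numpy as np
from sklearn.cluster import KMeans

def build_codebook(local_descriptors, n_words=256, seed=0):
    """local_descriptors: (N, D) array of local features pooled over training images."""
    return KMeans(n_clusters=n_words, random_state=seed, n_init=10).fit(local_descriptors)

def encode_image(codebook, descriptors):
    """Histogram of nearest codewords, used as a fixed-length image descriptor."""
    words = codebook.predict(descriptors)                        # (M,) cluster indices
    hist, _ = np.histogram(words, bins=np.arange(codebook.n_clusters + 1))
    return hist / max(hist.sum(), 1)                             # L1-normalized
```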

Multi-View Feature Combination for Ancient Paintings Chronological Classification
A novel computational method that uses multi-view local color features extracted from paintings to determine the era in which a painting was created, demonstrating the advantage of the proposed features especially in the case of small training samples.

Fast Patch-based Style Transfer of Arbitrary Style
A simpler optimization objective based on local matching is proposed that combines the content structure and style textures in a single layer of the pretrained network, with desirable properties such as a simpler optimization landscape, intuitive parameter tuning, and consistent frame-by-frame performance on video.

Genre and Style Based Painting Classification
Explores feature extraction on paintings and their classification into genres and styles, including a comparison to existing feature extraction and classification methods as well as an analysis of the authors' own approach across different feature vectors.

Image Style Transfer Using Convolutional Neural Networks
A Neural Algorithm of Artistic Style is introduced that can separate and recombine the image content and style of natural images, providing new insights into the deep image representations learned by convolutional neural networks and demonstrating their potential for high-level image synthesis and manipulation.
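
The style representation in this line of work is commonly a Gram matrix of CNN feature maps, with content handled by direct feature matching. A minimal sketch of the Gram-matrix style loss, assuming activations from a pretrained network such as VGG (illustrative only, not the paper's code):

```python
# Gram-matrix style loss as used in neural style transfer.
# `features` is a (B, C, H, W) activation tensor from a pretrained CNN layer.
import torch

def gram_matrix(features):
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)        # (B, C, C), normalized

def style_loss(generated_features, style_features):
    """Mean-squared distance between Gram matrices of generated and style images."""
    return torch.mean((gram_matrix(generated_features) - gram_matrix(style_features)) ** 2)
```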

Perceptual Losses for Real-Time Style Transfer and Super-Resolution
This work considers image transformation problems and proposes perceptual loss functions for training feed-forward networks on such tasks, showing results on image style transfer, where a feed-forward network is trained to solve, in real time, the optimization problem proposed by Gatys et al.
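
A perceptual loss compares images in the feature space of a fixed, pretrained network rather than in pixel space. The sketch below assumes torchvision's VGG16 as the loss network, cut at relu2_2; both choices are illustrative defaults, not the paper's exact configuration, and input normalization to ImageNet statistics is omitted for brevity.

```python
# Sketch of a feature-reconstruction ("perceptual") loss with a frozen VGG16.
import torch.nn as nn
from torchvision.models import vgg16

class PerceptualLoss(nn.Module):
    def __init__(self):
        super().__init__()
        # features[:9] ends at relu2_2; the cut point is an illustrative choice.
        self.vgg = vgg16(weights="IMAGENET1K_V1").features[:9].eval()
        for p in self.vgg.parameters():
            p.requires_grad_(False)       # the loss network stays fixed

    def forward(self, generated, target):
        return nn.functional.mse_loss(self.vgg(generated), self.vgg(target))
```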

Unsupervised Pixel-Level Domain Adaptation with Generative Adversarial Networks
This generative adversarial network (GAN)-based method adapts source-domain images to appear as if drawn from the target domain, and outperforms the state-of-the-art on a number of unsupervised domain adaptation scenarios by large margins.

Rethinking Atrous Convolution for Semantic Image Segmentation
The proposed DeepLabv3 system significantly improves over previous DeepLab versions without DenseCRF post-processing and attains performance comparable to other state-of-the-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.
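
Atrous (dilated) convolution enlarges the receptive field without downsampling; in PyTorch it is just the `dilation` argument of `Conv2d`. The sketch below is a simplified multi-rate block in the spirit of DeepLabv3's ASPP, not the exact module (the real ASPP also has a 1x1 branch and global pooling):

```python
# Simplified multi-rate atrous convolution block (ASPP-like), for illustration.
import torch
import torch.nn as nn

class TinyASPP(nn.Module):
    def __init__(self, in_ch=256, out_ch=256, rates=(1, 6, 12, 18)):
        super().__init__()
        # padding=r with dilation=r keeps the spatial size for 3x3 kernels.
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=r, dilation=r)
            for r in rates
        ])
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, kernel_size=1)

    def forward(self, x):
        # Each branch sees the same input at a different effective receptive field.
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))
```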

DualGAN: Unsupervised Dual Learning for Image-to-Image Translation
A novel dual-GAN mechanism is developed, which enables image translators to be trained from two sets of unlabeled images from two domains, and can even achieve comparable or slightly better results than conditional GAN trained on fully labeled data.

Image-to-Image Translation with Conditional Adversarial Networks
Conditional adversarial networks are investigated as a general-purpose solution to image-to-image translation problems, and it is demonstrated that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks.
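
The conditional formulation pairs an adversarial term, with the discriminator conditioned on the input image, against an L1 reconstruction term on the paired ground truth. A minimal sketch of that generator objective, with G and D as placeholder networks:

```python
# Sketch of a pix2pix-style generator objective: conditional adversarial + L1.
# G and D are placeholders; D sees the (input, output) pair stacked on channels.
import torch
import torch.nn as nn

def conditional_gan_generator_loss(G, D, x, y, lambda_l1=100.0):
    bce = nn.BCEWithLogitsLoss()
    fake = G(x)
    pred_fake = D(torch.cat([x, fake], dim=1))            # condition the critic on x
    loss_adv = bce(pred_fake, torch.ones_like(pred_fake))
    loss_l1 = nn.functional.l1_loss(fake, y)
    return loss_adv + lambda_l1 * loss_l1
```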