Publications
Universal Style Transfer via Feature Transforms
TLDR
The key ingredient of the method is a pair of feature transforms, whitening and coloring, embedded in an image reconstruction network; together they directly match the feature covariance of the content image to that of a given style image.
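The covariance-matching idea behind the whitening and coloring transforms can be sketched in a few lines of linear algebra. This is a minimal illustration of the general technique, not the authors' implementation; the function name, the `eps` regularizer, and the use of plain SVD on the covariance matrices are all assumptions for the sketch.

```python
import numpy as np

def whitening_coloring_transform(fc, fs, eps=1e-5):
    """Sketch of a whitening-coloring transform: remap content features fc
    (C x N) so their covariance matches that of style features fs (C x M).
    Not the paper's exact implementation; eps regularizes the eigenvalues."""
    c = fc.shape[0]

    # Whitening: center the content features, then project them so that
    # their covariance becomes (approximately) the identity.
    mc = fc.mean(axis=1, keepdims=True)
    fc_c = fc - mc
    cov_c = fc_c @ fc_c.T / (fc_c.shape[1] - 1) + eps * np.eye(c)
    Ec, wc, _ = np.linalg.svd(cov_c)  # symmetric PSD: singular = eigen
    whitened = Ec @ np.diag(wc ** -0.5) @ Ec.T @ fc_c

    # Coloring: re-project the whitened features onto the style covariance,
    # then restore the style mean.
    ms = fs.mean(axis=1, keepdims=True)
    fs_c = fs - ms
    cov_s = fs_c @ fs_c.T / (fs_c.shape[1] - 1) + eps * np.eye(c)
    Es, ws, _ = np.linalg.svd(cov_s)
    colored = Es @ np.diag(ws ** 0.5) @ Es.T @ whitened
    return colored + ms
```

After the transform, the second-order statistics (mean and covariance) of the output match those of the style features, which is the stylization signal the method feeds back through the reconstruction decoder.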
A Closed-form Solution to Photorealistic Image Stylization
TLDR
The results show that the proposed method generates photorealistic stylization outputs that human subjects prefer over those of competing methods, while running much faster.
Deep Joint Image Filtering
TLDR
This paper proposes a learning-based approach to constructing a joint filter with Convolutional Neural Networks that selectively transfers salient structures consistent in both guidance and target images, and validates its effectiveness through extensive comparisons with state-of-the-art methods.
Generative Face Completion
TLDR
This paper demonstrates, qualitatively and quantitatively, that the proposed face completion algorithm handles large regions of missing pixels in arbitrary shapes and generates realistic completion results.
Diversified Texture Synthesis with Feed-Forward Networks
TLDR
A deep generative feed-forward network is proposed that enables efficient synthesis of multiple textures within a single network and meaningful interpolation between them, along with a suite of techniques for better convergence and diversity.
Efficient Saliency-Model-Guided Visual Co-Saliency Detection
TLDR
Experimental results on two benchmark databases demonstrate that the proposed framework outperforms the state-of-the-art models in terms of both accuracy and efficiency.
Joint Image Filtering with Deep Convolutional Networks
TLDR
This paper proposes a learning-based approach for constructing joint filters based on Convolutional Neural Networks and shows that the model trained on a certain type of data, e.g., RGB and depth images, generalizes well to other modalities.
Flow-Grounded Spatial-Temporal Video Prediction from Still Images
TLDR
This work formulates multi-frame prediction as a multiple-time-step flow (multi-flow) prediction phase followed by a flow-to-frame synthesis phase, which keeps the model from operating directly in the high-dimensional pixel space of the frame sequence and is shown to produce better and more diverse predictions.
Few-shot Image Generation via Cross-domain Correspondence
TLDR
This work utilizes a large source domain for pretraining, transfers diversity information from source to target, and proposes a novel cross-domain distance consistency loss that preserves the relative similarities and differences between instances in the source.
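A distance consistency loss of the kind the TLDR describes can be sketched as a divergence between the pairwise-similarity distributions of source and target features for the same batch. This is a hypothetical illustration of the general idea, not the paper's exact loss; the cosine similarity, the softmax normalization, and the KL divergence are all assumptions for the sketch.

```python
import numpy as np

def _softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distance_consistency_loss(feats_src, feats_tgt):
    """Hypothetical sketch: for each sample, compare its distribution of
    similarities to the rest of the batch under the source model vs. the
    target model, and penalize the KL divergence between the two."""
    def sim_distributions(f):
        # Cosine similarity of every sample to every other sample.
        f = f / np.linalg.norm(f, axis=1, keepdims=True)
        s = f @ f.T
        n = len(f)
        off_diag = ~np.eye(n, dtype=bool)  # drop self-similarities
        return _softmax(s[off_diag].reshape(n, n - 1))

    p = sim_distributions(feats_src)
    q = sim_distributions(feats_tgt)
    # Mean KL(p || q) over the batch; zero when the relative structure of
    # the source batch is perfectly preserved in the target.
    return float(np.sum(p * (np.log(p) - np.log(q))) / len(p))
```

Because the loss only constrains *relative* similarities, the target generator is free to change appearance toward the new domain while inheriting the source's instance-level diversity.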
Collaborative Distillation for Ultra-Resolution Universal Style Transfer
TLDR
A new knowledge distillation method for encoder-decoder based neural style transfer reduces the number of convolutional filters and, for the first time, achieves ultra-resolution (over 40 megapixel) universal style transfer on a single 12GB GPU.