Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
- C. Ledig, Lucas Theis, Wenzhe Shi
- Computer Science · IEEE Conference on Computer Vision and Pattern…
- 15 September 2016
SRGAN, a generative adversarial network (GAN) for image super-resolution (SR), is presented; to the authors' knowledge, it is the first framework capable of inferring photo-realistic natural images at 4x upscaling factors. It uses a perceptual loss function consisting of an adversarial loss and a content loss.
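The perceptual loss described above can be sketched as a weighted sum of a content term (feature-space MSE) and an adversarial term. This is a minimal NumPy illustration, not the paper's implementation; the weight `adv_weight` and the helper names are assumptions for illustration.

```python
import numpy as np

def content_loss(features_sr, features_hr):
    # MSE between feature maps of the super-resolved and ground-truth
    # images (e.g., activations of a pretrained network)
    return np.mean((features_sr - features_hr) ** 2)

def adversarial_loss(d_out_sr):
    # Encourage the generator to fool the discriminator:
    # -log D(G(x)) averaged over the batch
    return -np.mean(np.log(d_out_sr))

def perceptual_loss(features_sr, features_hr, d_out_sr, adv_weight=1e-3):
    # Perceptual loss = content loss + weighted adversarial loss
    return content_loss(features_sr, features_hr) + adv_weight * adversarial_loss(d_out_sr)
```

When the generator's output matches the target features and fully fools the discriminator (D outputs 1), both terms vanish.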
Lossy Image Compression with Compressive Autoencoders
It is shown that minimal changes to the loss are sufficient to train deep autoencoders that are competitive with JPEG 2000 and outperform recently proposed approaches based on RNNs, while remaining computationally efficient thanks to a sub-pixel architecture, which makes them suitable for high-resolution images.
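The sub-pixel architecture mentioned above rearranges channels into spatial resolution (a "pixel shuffle" / depth-to-space operation) instead of using a learned upsampling. A minimal NumPy sketch of that rearrangement, assuming a single `(C*r*r, H, W)` feature map:

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) tensor into (C, H*r, W*r)
    by moving channel blocks into spatial positions."""
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    # Interleave the r*r sub-channels into the spatial grid
    x = x.transpose(0, 3, 1, 4, 2)  # (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)
```

Because the rearrangement is a pure memory permutation, upsampling happens at the very end of the network and all convolutions run at low resolution, which is where the computational efficiency comes from.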
A note on the evaluation of generative models
This article reviews mostly known but often underappreciated properties relating to the evaluation and interpretation of generative models, with a focus on image models, and shows that three of the currently most commonly used criteria (average log-likelihood, Parzen window estimates, and visual fidelity of samples) are largely independent of each other when the data is high-dimensional.
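One of the criteria discussed, the Parzen window estimate, scores held-out data under a kernel density fit to model samples. A minimal sketch of such an estimator, assuming an isotropic Gaussian kernel (the function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def log_mean_exp(a, axis):
    # Numerically stable log of the mean of exp(a) along an axis
    m = np.max(a, axis=axis, keepdims=True)
    return (m + np.log(np.mean(np.exp(a - m), axis=axis, keepdims=True))).squeeze(axis)

def parzen_log_likelihood(samples, data, sigma=1.0):
    """Average log-likelihood of `data` under an isotropic Gaussian
    Parzen window fit to model `samples`."""
    diff = data[:, None, :] - samples[None, :, :]        # (n_data, n_samples, dim)
    log_kernel = -0.5 * np.sum(diff ** 2, axis=2) / sigma ** 2
    dim = data.shape[1]
    log_norm = 0.5 * dim * np.log(2 * np.pi * sigma ** 2)
    return np.mean(log_mean_exp(log_kernel, axis=1) - log_norm)
```

The estimator's strong dependence on the bandwidth `sigma` and the number of samples is one reason the article argues it can rank models very differently from true log-likelihood or visual fidelity.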
HoloGAN: Unsupervised Learning of 3D Representations From Natural Images
- Thu Nguyen-Phuoc, Chuan Li, Lucas Theis, Christian Richardt, Yong-Liang Yang
- Computer Science · IEEE/CVF International Conference on Computer…
- 2 April 2019
HoloGAN is the first generative model that learns 3D representations from natural images in an entirely unsupervised manner, and it is shown to generate images with visual quality similar to or higher than that of other generative models.
Amortised MAP Inference for Image Super-resolution
- C. Sønderby, Jose Caballero, Lucas Theis, Wenzhe Shi, Ferenc Huszár
- Computer ScienceICLR
- 14 October 2016
A novel neural network architecture is introduced that performs a projection onto the affine subspace of valid SR solutions, ensuring that the high-resolution output of the network is always consistent with the low-resolution input; the GAN-based approach is shown to perform best on real image data.
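The consistency projection described above maps an SR estimate y onto the set {y : Ay = x}, where A is the downsampling operator and x the low-resolution input. A minimal sketch assuming A is 1D average pooling by a factor of 2 (the paper's operator and dimensionality differ; this only illustrates the affine projection idea):

```python
import numpy as np

def downsample(y, r=2):
    # Average-pooling downsampling operator A
    return y.reshape(-1, r).mean(axis=1)

def project_consistent(y, x, r=2):
    """Project the SR estimate y onto {y : A y = x} via
    y + A^+ (x - A y), where for average pooling the pseudo-inverse
    A^+ is nearest-neighbour upsampling of the residual."""
    residual = x - downsample(y, r)
    return y + np.repeat(residual, r)
```

After the projection, downsampling the corrected estimate recovers the low-resolution input exactly, which is the guarantee the architecture enforces.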
Deep Gaze I: Boosting Saliency Prediction with Feature Maps Trained on ImageNet
This work presents a novel way of reusing neural networks pretrained on object recognition in models of fixation prediction, significantly outperforming all state-of-the-art models on the MIT Saliency Benchmark.
Fast Face-Swap Using Convolutional Neural Networks
- I. Korshunova, Wenzhe Shi, J. Dambre, Lucas Theis
- Computer Science · IEEE International Conference on Computer Vision…
- 29 November 2016
A new loss function is devised that enables the network to produce highly photorealistic results, making face swapping work in real time with no input from the user.
Generative Image Modeling Using Spatial LSTMs
This work introduces a recurrent image model based on multidimensional long short-term memory units, which are particularly suited to image modeling due to their spatial structure; the model outperforms the state of the art in quantitative comparisons on several image datasets and produces promising results when used for texture synthesis and inpainting.
Faster gaze prediction with dense networks and Fisher pruning
Through a combination of knowledge distillation and Fisher pruning, this paper obtains much more runtime-efficient architectures for saliency prediction, achieving a 10x speedup at the same AUC performance as a state-of-the-art network on the CAT2000 dataset.
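Fisher pruning, as used above, ranks units by an approximation to the Fisher information of their removal (typically squared activation-gradient products) and removes the least important ones. A simplified stand-alone sketch of that ranking step, assuming per-channel scalar activations and gradients (the paper's procedure is iterative and interleaved with training):

```python
import numpy as np

def fisher_scores(activations, grads):
    """Approximate per-channel importance as the batch mean of the
    squared activation-times-gradient product (a Fisher-information
    style saliency of removing each channel)."""
    # activations, grads: (batch, channels)
    return np.mean((activations * grads) ** 2, axis=0)

def prune_channels(weights, scores, k):
    """Zero out the k channels with the smallest importance scores."""
    idx = np.argsort(scores)[:k]
    pruned = weights.copy()
    pruned[idx] = 0.0
    return pruned, idx
```

Removing low-score channels first is what lets the pruned network keep its AUC while shrinking its runtime cost.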