DeepRemaster: temporal source-reference attention networks for comprehensive video enhancement

@article{Iizuka2019DeepRemasterTS,
  title={DeepRemaster: temporal source-reference attention networks for comprehensive video enhancement},
  author={Satoshi Iizuka and Edgar Simo-Serra},
  journal={ACM Trans. Graph.},
  year={2019},
  volume={38},
  pages={176:1--176:13}
}
The remastering of vintage film comprises a diversity of sub-tasks, including super-resolution, noise removal, and contrast enhancement, which aim to restore the deteriorated film medium to its original state. Additionally, due to the technical limitations of the time, most vintage film is either recorded in black and white or has low-quality colors, for which colorization becomes necessary. In this work, we propose a single framework to tackle the entire remastering task semi-interactively… 
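To make the source-reference attention idea concrete, here is a minimal PyTorch-style sketch: features of a grayscale source frame query features of a color reference, and the attended reference features are folded back in with a residual connection. Layer sizes, the scaling, and all names are illustrative assumptions, not the paper's exact architecture.

import torch
import torch.nn as nn

class SourceReferenceAttention(nn.Module):
    # Illustrative sketch: source features attend over reference features.
    def __init__(self, channels, key_dim=64):
        super().__init__()
        self.q = nn.Conv2d(channels, key_dim, 1)   # queries from source frames
        self.k = nn.Conv2d(channels, key_dim, 1)   # keys from the reference
        self.v = nn.Conv2d(channels, channels, 1)  # values from the reference

    def forward(self, src, ref):
        # src: (B, C, H, W) frame features; ref: (B, C, Hr, Wr) reference features
        B, C, H, W = src.shape
        q = self.q(src).flatten(2).transpose(1, 2)   # (B, H*W, key_dim)
        k = self.k(ref).flatten(2)                   # (B, key_dim, Hr*Wr)
        v = self.v(ref).flatten(2).transpose(1, 2)   # (B, Hr*Wr, C)
        attn = torch.softmax(q @ k / k.shape[1] ** 0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, C, H, W)
        return src + out                             # residual blend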
Deep Colorization: A Channel Attention-based CNN for Video Colorization
TLDR
This work proposes an end-to-end framework based on temporal convolutional neural networks with attention mechanisms that can colorize multiple frames at the same time and can handle the colorization of long video sequences given only one reference frame.
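Channel attention of the kind mentioned here is commonly realized as a squeeze-and-excitation gate; the sketch below shows that generic pattern rather than the cited paper's exact design, and the reduction factor is an assumption.

import torch.nn as nn

class ChannelAttention(nn.Module):
    # Generic squeeze-and-excitation style channel gate (illustrative).
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global context per channel
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))     # excite: reweight channels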
Deep Video Prior for Video Consistency and Propagation
TLDR
This work shows that temporal consistency can be achieved by training a convolutional network on a video with Deep Video Prior (DVP), and shows its effectiveness in propagating three different types of information (color, artistic style, and object segmentation).
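A minimal sketch of the single-video training behind this idea, under the assumption that the flickery per-frame results already exist: one small network is fit from scratch to reproduce them, and because a single set of weights serves every frame (and training stops early), the outputs come out temporally consistent. All names and hyperparameters are illustrative.

import torch

def train_dvp(net, frames, processed, steps=25, lr=1e-4):
    # frames, processed: lists of (1, C, H, W) tensors from one video
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):  # stop early: the net fits the consistent
        for x, y in zip(frames, processed):  # signal before the flicker
            opt.zero_grad()
            loss = (net(x) - y).abs().mean()
            loss.backward()
            opt.step()
    return net  # net(frame) now yields a temporally consistent output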
Bringing Old Films Back to Life
TLDR
A learning-based framework, the recurrent transformer network (RTN), restores heavily degraded old films based on hidden knowledge learned from adjacent frames, which contain abundant information about occlusions, removing challenging artifacts from each frame while ensuring temporal coherency.
Image Harmonization with Attention-based Deep Feature Modulation
TLDR
A novel attention-based module is proposed that aligns the standard deviation of the foreground features with that of the background features, capturing global dependencies in the entire image.
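The standard-deviation alignment can be sketched directly: compute masked statistics over the foreground and background regions, then rescale the foreground features so their spread matches the background's. The attention weighting of the actual module is omitted; the mask format and all names here are assumptions.

import torch

def masked_stats(x, m, eps=1e-5):
    # x: (B, C, H, W) features; m: (B, 1, H, W) binary region mask
    n = m.sum((2, 3), keepdim=True).clamp_min(1.0)
    mean = (x * m).sum((2, 3), keepdim=True) / n
    var = (((x - mean) * m) ** 2).sum((2, 3), keepdim=True) / n
    return mean, var.clamp_min(eps).sqrt()

def align_foreground_std(feat, fg_mask):
    bg_mask = 1.0 - fg_mask
    fg_mean, fg_std = masked_stats(feat, fg_mask)
    _, bg_std = masked_stats(feat, bg_mask)
    fg_aligned = (feat - fg_mean) * (bg_std / fg_std) + fg_mean
    return feat * bg_mask + fg_aligned * fg_mask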
Color2Style: Real-Time Exemplar-Based Image Colorization with Self-Reference Learning and Deep Feature Modulation
TLDR
A deep exemplar-based image colorization approach named Color2Style resurrects grayscale image media by filling them with vibrant colors, achieving appealing performance at real-time processing speed and surpassing other state-of-the-art methods in qualitative comparison and a user study.
SCSNet: An Efficient Paradigm for Learning Simultaneously Image Colorization and Super-Resolution
TLDR
An efficient paradigm for performing Simultaneous Image Colorization and Super-resolution (SCS) and an end-to-end SCSNet to achieve this goal are presented; the method's superiority over state-of-the-art methods in generating authentic images is demonstrated.
Color2Embed: Fast Exemplar-Based Image Colorization using Color Embeddings
TLDR
This paper presents a fast exemplar-based image colorization approach using color embeddings, named Color2Embed, which adopts a self-augmented self-reference learning scheme: the reference image is generated by graphical transformations of the original colorful image, so training can be formulated in a paired manner.
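The self-reference scheme amounts to a data-pipeline step: the exemplar is derived from the ground-truth color image by photometric and geometric perturbations, so every training sample comes pre-paired. The specific transforms below are illustrative assumptions, not the paper's exact augmentation recipe.

import torchvision.transforms as T
import torchvision.transforms.functional as TF

ref_transform = T.Compose([
    T.RandomResizedCrop(256, scale=(0.6, 1.0)),   # geometric perturbation
    T.RandomHorizontalFlip(),
    T.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2),
])

def make_training_pair(color_img):
    # color_img: (3, H, W) tensor; its perturbed copy serves as the exemplar
    gray = TF.rgb_to_grayscale(color_img, num_output_channels=3)
    ref = ref_transform(color_img)
    return gray, ref, color_img  # network input, reference, target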
Legacy Photo Editing with Learned Noise Prior
TLDR
This work proposes an IEGAN framework that performs image editing, including joint denoising, inpainting, and colorization, based on an estimated noise prior, and evaluates the proposed system against state-of-the-art image enhancement methods.
SCGAN: Saliency Map-Guided Colorization With Generative Adversarial Network
TLDR
A novel saliency-map-based guidance method is proposed; experimental results show that SCGAN can generate more reasonable colorized images than state-of-the-art techniques.
Towards Vivid and Diverse Image Colorization with Generative Color Prior
TLDR
This work aims at recovering vivid colors by leveraging the rich and diverse color priors encapsulated in a pretrained Generative Adversarial Network (GAN): features are retrieved via a GAN encoder and incorporated into the colorization process with feature modulations.

References

Learning Blind Video Temporal Consistency
TLDR
An efficient approach based on a deep recurrent network for enforcing temporal consistency in video; it can handle multiple and unseen tasks, including but not limited to artistic style transfer, enhancement, colorization, image-to-image translation, and intrinsic image decomposition.
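The usual training signal in this line of work is a short-term temporal-consistency loss: warp the previous output to the current frame with optical flow and penalize differences where the flow is valid. A sketch, assuming the flow and occlusion mask come from an external estimator:

import torch
import torch.nn.functional as F

def warp(img, flow):
    # img: (B, C, H, W); flow: (B, 2, H, W) backward flow in pixels (dx, dy)
    B, _, H, W = img.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack((xs, ys), dim=0).float().to(img.device)  # (2, H, W)
    coords = grid.unsqueeze(0) + flow
    gx = 2 * coords[:, 0] / (W - 1) - 1          # normalize to [-1, 1]
    gy = 2 * coords[:, 1] / (H - 1) - 1
    return F.grid_sample(img, torch.stack((gx, gy), dim=-1), align_corners=True)

def temporal_loss(out_t, out_prev, flow, occ_mask):
    # occ_mask: (B, 1, H, W), 1 where the flow is reliable (non-occluded)
    return (occ_mask * (out_t - warp(out_prev, flow)).abs()).mean()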
Spatio-Temporal Transformer Network for Video Restoration
TLDR
A novel Spatio-temporal Transformer Network (STTN) is proposed which handles multiple frames at once and thereby manages to mitigate the common nuisance of occlusions in optical flow estimation.
Switchable Temporal Propagation Network
TLDR
This paper proposes a learnable unified framework for propagating a variety of visual properties of video images, including but not limited to color, high dynamic range (HDR), and segmentation information, where the properties are available for only a few key-frames.
FFDNet: Toward a Fast and Flexible Solution for CNN-Based Image Denoising
TLDR
The proposed FFDNet works on downsampled sub-images, achieving a good trade-off between inference speed and denoising performance, and enjoys several desirable properties, including the ability to handle a wide range of noise levels effectively with a single network.
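FFDNet's input arrangement can be sketched in a few lines: the image is rearranged into four downsampled sub-images (a pixel unshuffle) and concatenated with a uniform noise-level map, which is what lets one network cover a wide range of noise levels. The helper name is an assumption.

import torch
import torch.nn.functional as F

def ffdnet_input(img, sigma):
    # img: (B, C, H, W) with even H and W; sigma: scalar noise level
    sub = F.pixel_unshuffle(img, 2)                 # (B, 4C, H/2, W/2) sub-images
    noise_map = torch.full_like(sub[:, :1], sigma)  # uniform noise-level map
    return torch.cat([sub, noise_map], dim=1)       # fed to the denoising CNN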
Image Transformer
TLDR
This work generalizes a recently proposed model architecture based on self-attention, the Transformer, to a sequence-modeling formulation of image generation with a tractable likelihood, significantly increasing the size of images the model can process in practice while maintaining significantly larger receptive fields per layer than typical convolutional neural networks.
Universal Denoising Networks: A Novel CNN Architecture for Image Denoising (Stamatios Lefkimmiatis, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition)
TLDR
A novel network architecture is designed for learning discriminative image models that efficiently tackle grayscale and color image denoising; two different variants are introduced, achieving excellent results under additive white Gaussian noise.
Deep exemplar-based colorization
TLDR
This work proposes the first deep learning approach for exemplar-based local colorization, which performs robustly and generalizes well even when using reference images that are unrelated to the input grayscale image.
Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network
TLDR
This paper presents the first convolutional neural network capable of real-time SR of 1080p videos on a single K2 GPU and introduces an efficient sub-pixel convolution layer which learns an array of upscaling filters to upscale the final LR feature maps into the HR output.
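The sub-pixel convolution layer corresponds to what PyTorch now exposes as nn.PixelShuffle: an ordinary convolution produces r*r*C channels at low resolution, and a periodic shuffle rearranges them into the upscaled output. A sketch under those assumptions:

import torch.nn as nn

def subpixel_upsampler(in_ch, out_ch, r):
    # Learn r*r upscaling filters, then rearrange channels into space.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch * r * r, kernel_size=3, padding=1),
        nn.PixelShuffle(r),  # (B, out_ch*r*r, H, W) -> (B, out_ch, r*H, r*W)
    )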
Deep Video Color Propagation
TLDR
This work proposes a deep learning framework for color propagation that combines a local strategy, propagating colors frame-by-frame to ensure temporal stability, with a global strategy that uses semantics to propagate color over a longer range.
Denoising with kernel prediction and asymmetric loss functions
TLDR
A theoretical analysis of convergence rates of kernel-predicting architectures is presented, shedding light on why kernel prediction performs better than synthesizing the colors directly, complementing the empirical evidence presented in this and previous works.
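Kernel prediction can be sketched as follows: instead of regressing colors directly, the network emits a k x k filter per pixel, which is normalized and applied to the noisy input as a weighted average over a local window. The softmax normalization and shapes are assumptions of this sketch, not necessarily the paper's exact variant.

import torch
import torch.nn.functional as F

def apply_predicted_kernels(noisy, kernels, k=5):
    # noisy: (B, C, H, W); kernels: (B, k*k, H, W) raw per-pixel weights
    weights = torch.softmax(kernels, dim=1)             # normalize each kernel
    patches = F.unfold(noisy, k, padding=k // 2)        # (B, C*k*k, H*W)
    B, C, H, W = noisy.shape
    patches = patches.view(B, C, k * k, H, W)
    return (patches * weights.unsqueeze(1)).sum(dim=2)  # per-pixel weighted average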