Publications
Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels
TLDR
A theoretically grounded set of noise-robust loss functions, which can be seen as a generalization of MAE and CCE, is presented; these losses can be readily applied with any existing DNN architecture and algorithm while yielding good performance in a wide range of noisy-label scenarios.
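As commonly presented, the generalized loss is a negative Box-Cox transform of the predicted probability of the true class, recovering CCE in the limit q → 0 and being proportional to MAE at q = 1. A minimal PyTorch-style sketch under that assumption (function name and the q default are illustrative, not taken from the paper):

```python
import torch
import torch.nn.functional as F

def generalized_cross_entropy(logits, targets, q=0.7):
    """Noise-robust loss L_q = (1 - p_y ** q) / q.

    Approaches standard cross entropy as q -> 0 and is proportional
    to MAE at q = 1. `q=0.7` is only an illustrative default.
    """
    probs = F.softmax(logits, dim=1)                         # (N, C) class probabilities
    p_y = probs.gather(1, targets.unsqueeze(1)).squeeze(1)   # probability of the true class
    loss = (1.0 - p_y.pow(q)) / q
    return loss.mean()
```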
NTIRE 2020 Challenge on Real Image Denoising: Dataset, Methods and Results
TLDR
This paper reviews the NTIRE 2020 challenge on real image denoising, based on the SIDD benchmark, with a focus on the newly introduced dataset, the proposed methods, and their results.
Self-Distillation as Instance-Specific Label Smoothing
TLDR
A new interpretation of teacher-student training as amortized MAP estimation is offered, in which teacher predictions enable instance-specific regularization, and the importance of predictive diversity in addition to predictive uncertainty is highlighted.
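One way to make the label-smoothing analogy concrete: uniform label smoothing mixes the one-hot target with a constant distribution, whereas a teacher supplies a per-example mixing distribution. A hedged sketch of that comparison (function names and the mixing weight alpha are illustrative, not the paper's exact formulation):

```python
import torch
import torch.nn.functional as F

def smoothed_target(one_hot, alpha=0.1):
    """Standard label smoothing: mix the hard label with a uniform distribution."""
    num_classes = one_hot.size(1)
    return (1 - alpha) * one_hot + alpha / num_classes

def teacher_smoothed_target(one_hot, teacher_probs, alpha=0.1):
    """Instance-specific smoothing: the teacher's prediction replaces the uniform prior."""
    return (1 - alpha) * one_hot + alpha * teacher_probs

def soft_cross_entropy(student_logits, target_probs):
    """Cross entropy between soft targets and the student's predictive distribution."""
    log_probs = F.log_softmax(student_logits, dim=1)
    return -(target_probs * log_probs).sum(dim=1).mean()
```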
Real-Time Uncertainty Estimation in Computer Vision via Uncertainty-Aware Distribution Distillation
TLDR
This work proposes a simple, easy-to-optimize distillation method for learning the conditional predictive distribution of a pre-trained dropout model, enabling fast, sample-free uncertainty estimation in computer vision tasks, and demonstrates that it significantly reduces inference time, enabling real-time uncertainty quantification.
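The sample-free idea can be pictured as distilling many MC-dropout forward passes of the teacher into a single student pass that outputs both a prediction and an uncertainty estimate. A rough sketch under those assumptions (the two-headed student and the Gaussian-style objective are illustrative choices, not necessarily the paper's exact formulation):

```python
import torch

@torch.no_grad()
def mc_dropout_teacher(teacher, x, n_samples=32):
    """Keep dropout active at test time and collect stochastic predictions."""
    teacher.train()  # enables dropout layers
    samples = torch.stack([teacher(x) for _ in range(n_samples)])  # (S, N, ...)
    return samples.mean(0), samples.var(0)

def distillation_step(student, teacher, x):
    """Single-pass student regresses the teacher's predictive mean and variance."""
    target_mean, target_var = mc_dropout_teacher(teacher, x)
    pred_mean, pred_log_var = student(x)  # assumed two-headed student
    # Heteroscedastic Gaussian NLL against the teacher's mean (illustrative objective).
    nll = 0.5 * (pred_log_var + (target_mean - pred_mean) ** 2 / pred_log_var.exp())
    var_match = (pred_log_var.exp() - target_var) ** 2
    return (nll + var_match).mean()
```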
Deep Adaptive Inference Networks for Single Image Super-Resolution
TLDR
AdaDSR combines an SISR model as backbone with a lightweight adapter module that takes image features and a resource constraint as input and predicts a map of local network depth, so the model can be flexibly tuned to meet a range of efficiency constraints.
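One way to picture the adapter is as a tiny network that maps features plus a scalar resource budget to a per-location depth map, which then gates how many residual blocks each region passes through. A very loose sketch (module names, the gating rule, and shapes are assumptions for illustration only):

```python
import torch
import torch.nn as nn

class DepthAdapter(nn.Module):
    """Predicts a spatial map of desired network depth from features and a budget."""
    def __init__(self, channels, max_depth):
        super().__init__()
        self.max_depth = max_depth
        self.head = nn.Conv2d(channels + 1, 1, kernel_size=3, padding=1)

    def forward(self, feats, budget):
        # Broadcast the scalar budget as an extra input channel.
        b = torch.full_like(feats[:, :1], budget)
        depth_map = torch.sigmoid(self.head(torch.cat([feats, b], dim=1)))
        return depth_map * self.max_depth  # values in [0, max_depth]

def adaptive_forward(blocks, feats, depth_map):
    """Run residual blocks, masking out locations whose predicted depth is exceeded."""
    out = feats
    for d, block in enumerate(blocks):
        mask = (depth_map > d).float()  # 1 where this block should still run
        out = out + mask * block(out)   # identity (skip) elsewhere
    return out
```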
AIM 2020 Challenge on Learned Image Signal Processing Pipeline
TLDR
This paper reviews the second AIM learned ISP challenge and provides a description of the proposed solutions and results, defining the state of the art for practical image signal processing pipeline modeling.
Confidence Calibration for Convolutional Neural Networks Using Structured Dropout
TLDR
This paper uses the SVHN, CIFAR-10 and CIFAR-100 datasets to empirically compare model diversity and confidence errors obtained using various dropout techniques, and shows the merit of structured dropout in a Bayesian active learning application.
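Confidence-error comparisons of this kind typically rely on a calibration metric such as expected calibration error (ECE), computed by binning predictions by confidence and comparing per-bin accuracy against per-bin confidence. A small, generic sketch of that metric (the equal-width binning scheme is a common choice, not specific to the paper):

```python
import numpy as np

def expected_calibration_error(confidences, predictions, labels, n_bins=15):
    """ECE: weighted average gap between confidence and accuracy over confidence bins."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.sum() == 0:
            continue
        acc = (predictions[in_bin] == labels[in_bin]).mean()
        conf = confidences[in_bin].mean()
        ece += in_bin.mean() * abs(acc - conf)
    return ece
```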
NTIRE 2022 Challenge on Super-Resolution and Quality Enhancement of Compressed Video: Dataset, Methods and Results
TLDR
This paper reviews the NTIRE 2022 challenge, which aims at enhancing videos compressed by HEVC at a fixed QP, and proposes the LDV 2.0 dataset, which includes the LDV dataset (240 videos) and 95 additional videos.
Learning the Distribution: A Unified Distillation Paradigm for Fast Uncertainty Estimation in Computer Vision
TLDR
A unified distillation paradigm is proposed for learning the conditional predictive distribution of a pre-trained dropout model, enabling fast estimation of both aleatoric and epistemic uncertainty at the same time.
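Estimating both uncertainty types at once is usually grounded in a total-variance decomposition of the teacher's MC-dropout output: the mean of the per-sample variances captures aleatoric noise, while the variance of the per-sample means captures epistemic uncertainty, and a student can then be trained to regress both maps in a single pass. A sketch of the teacher-side decomposition (purely illustrative, assuming a heteroscedastic regression head):

```python
import torch

@torch.no_grad()
def decompose_uncertainty(teacher, x, n_samples=32):
    """Split MC-dropout predictive variance into aleatoric and epistemic parts."""
    teacher.train()  # keep dropout active
    means, variances = [], []
    for _ in range(n_samples):
        mu, log_var = teacher(x)   # assumed heteroscedastic teacher head
        means.append(mu)
        variances.append(log_var.exp())
    means = torch.stack(means)      # (S, N, ...)
    variances = torch.stack(variances)
    aleatoric = variances.mean(0)   # E[sigma^2]: data noise
    epistemic = means.var(0)        # Var[mu]: model uncertainty
    return means.mean(0), aleatoric, epistemic
```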
...
...