Active Fine-Tuning From gMAD Examples Improves Blind Image Quality Assessment

@article{Wang2022ActiveFF,
  title={Active Fine-Tuning From gMAD Examples Improves Blind Image Quality Assessment},
  author={Zhihua Wang and Kede Ma},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2022},
  volume={44},
  pages={4577-4590}
}
  • Zhihua Wang, Kede Ma
  • Published 8 March 2020
  • Computer Science
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
The research in image quality assessment (IQA) has a long history, and significant progress has been made by leveraging recent advances in deep neural networks (DNNs). Despite high correlation numbers on existing IQA datasets, DNN-based models may be easily falsified in the group maximum differentiation (gMAD) competition. Here we show that gMAD examples can be used to improve blind IQA (BIQA) methods. Specifically, we first pre-train a DNN-based BIQA model using multiple noisy annotators, and… 
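The gMAD selection step at the heart of this idea is easy to illustrate. Below is a minimal sketch of how gMAD pairs can be picked, assuming two scoring callables: `model_b` as the defender under test and `model_a` as the attacker; the function name, level count, and quantile binning are illustrative assumptions, not the authors' code.

```python
# Hedged sketch of gMAD pair selection: the defender (model_b) sees the two
# images in a pair as (roughly) equal in quality, while the attacker (model_a)
# disagrees as strongly as possible.
import numpy as np

def select_gmad_pairs(images, model_a, model_b, num_levels=5):
    scores_a = np.array([model_a(img) for img in images])
    scores_b = np.array([model_b(img) for img in images])

    pairs = []
    # Partition the image set into quality levels according to the defender.
    edges = np.quantile(scores_b, np.linspace(0.0, 1.0, num_levels + 1))
    for lo, hi in zip(edges[:-1], edges[1:]):
        idx = np.where((scores_b >= lo) & (scores_b <= hi))[0]
        if len(idx) < 2:
            continue
        # Within a level, the attacker nominates the pair it considers most different.
        best = idx[np.argmax(scores_a[idx])]
        worst = idx[np.argmin(scores_a[idx])]
        pairs.append((int(best), int(worst)))
    return pairs
```

Pairs chosen this way would then be rated by humans, and the resulting labels used to fine-tune the defender, which is the kind of active fine-tuning loop the abstract refers to.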

Uncertainty-Aware Blind Image Quality Assessment in the Laboratory and Wild

TLDR
A unified BIQA model is developed, together with an approach for training it on both synthetic and realistic distortions; the universality of the proposed training strategy is demonstrated by using it to improve existing BIQA models.

Continual Learning for Blind Image Quality Assessment

TLDR
This paper formulates continual learning for BIQA, in which a model learns continually from a stream of IQA datasets, building on what was learned from previously seen data.

Troubleshooting Blind Image Quality Models in the Wild

TLDR
Inspired by recent findings that difficult samples for deep models may be exposed through network pruning, a set of "self-competitors" is constructed as random ensembles of pruned versions of the target model to be improved.
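The pruning idea can be made concrete with a short sketch that builds randomly pruned copies of a model using PyTorch's pruning utilities; the function name, ensemble size, and pruning amount below are assumptions for illustration, not the paper's exact procedure.

```python
# Hedged sketch: build "self-competitors" as randomly pruned copies of a model.
import copy
import torch
import torch.nn.utils.prune as prune

def make_self_competitors(model, num_copies=5, amount=0.2):
    competitors = []
    for _ in range(num_copies):
        clone = copy.deepcopy(model)
        for module in clone.modules():
            if isinstance(module, (torch.nn.Conv2d, torch.nn.Linear)):
                # Randomly zero out a fraction of the weights in each layer.
                prune.random_unstructured(module, name="weight", amount=amount)
        competitors.append(clone.eval())
    # Samples on which the pruned copies disagree with the full model can be
    # flagged as candidates for further inspection or fine-tuning.
    return competitors
```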

Image Quality Assessment: Integrating Model-Centric and Data-Centric Approaches

TLDR
A computational framework that integrates model-centric and data-centric IQA is described, and computational modules to quantify the sampling-worthiness of candidate images based on blind IQA (BIQA) model predictions and deep content-aware features are designed.

Learning Transformer Features for Image Quality Assessment

TLDR
A unified IQA framework is proposed that utilizes a CNN backbone and a transformer encoder to extract features, is compatible with both FR and NR modes, and allows for a joint training scheme.

Semi-Supervised Deep Ensembles for Blind Image Quality Assessment

TLDR
This work investigates a semi-supervised ensemble learning method to produce generalizable blind image quality assessment models and conducts extensive experiments to demonstrate the advantages of employing unlabeled data for BIQA, especially in model generalization and failure identification.

References

SHOWING 1-10 OF 63 REFERENCES

Blind Image Quality Assessment by Learning from Multiple Annotators

TLDR
This work develops a blind IQA (BIQA) model and a method of training it without human ratings, and demonstrates that the model outperforms state-of-the-art BIQA models in terms of correlation with human ratings on existing databases, as well as in the group maximum differentiation (gMAD) competition.

Learning To Blindly Assess Image Quality In The Laboratory And Wild

TLDR
A BIQA model and an approach for training it on multiple IQA databases (covering different distortion scenarios) simultaneously are developed, demonstrating that the model optimized by the proposed training strategy is effective in blindly assessing image quality in the laboratory and in the wild, outperforming previous BIQA methods by a large margin.

dipIQ: Blind Image Quality Assessment by Learning-to-Rank Discriminable Image Pairs

TLDR
This paper shows that a vast amount of reliable training data in the form of quality-discriminable image pairs (DIPs) can be obtained automatically at low cost by exploiting large-scale databases with diverse image content, and learns an opinion-unaware BIQA (OU-BIQA, meaning that no subjective opinions are used for training) model from millions of DIPs, leading to a DIP inferred quality (dipIQ) index.
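As a toy illustration of how such pairs can be generated without subjective ratings, the sketch below distorts a single pristine image at increasing blur levels, so the quality ordering within each pair is known by construction; the actual dipIQ pipeline is more elaborate, and the blur-based construction and helper name here are assumptions.

```python
# Toy sketch of quality-discriminable image pairs (DIPs): the less-blurred
# member of each pair is higher quality by construction.
from itertools import combinations
from PIL import Image, ImageFilter

def make_dips(pristine_path, blur_radii=(0, 1, 3, 6)):
    ref = Image.open(pristine_path).convert("RGB")
    versions = [ref.filter(ImageFilter.GaussianBlur(radius=r)) for r in blur_radii]
    # Each (less distorted, more distorted) pair carries a known preference,
    # so it can supervise a learning-to-rank objective without human labels.
    return [(versions[i], versions[j])
            for i, j in combinations(range(len(versions)), 2)]
```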

RankIQA: Learning from Rankings for No-Reference Image Quality Assessment

TLDR
This work proposes a no-reference image quality assessment (NR-IQA) approach that learns from rankings (RankIQA), and demonstrates how this approach can be made significantly more efficient than traditional Siamese Networks by forward propagating a batch of images through a single network and backpropagating gradients derived from all pairs of images in the batch.
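The batch trick mentioned above is easy to sketch: one forward pass over a batch, followed by a margin ranking loss over all ordered pairs inside it, instead of a two-branch Siamese pass per pair. In this hedged PyTorch sketch, `model`, `images`, and `rank_labels` (larger means known higher quality, e.g. a lower synthetic distortion level) are assumptions.

```python
import torch

def batch_ranking_loss(model, images, rank_labels, margin=1.0):
    """images: (B, C, H, W); rank_labels: (B,) relative quality ranks."""
    scores = model(images).squeeze(-1)                  # one pass, no Siamese twin
    diff_s = scores.unsqueeze(0) - scores.unsqueeze(1)  # [i, j] = s_j - s_i
    diff_r = rank_labels.unsqueeze(0) - rank_labels.unsqueeze(1)
    mask = diff_r > 0                                   # pairs with a known ordering
    if not mask.any():
        return scores.sum() * 0.0
    # Hinge: the higher-ranked image should score higher by at least `margin`.
    return torch.clamp(margin - diff_s[mask], min=0).mean()
```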

Blind Image Quality Assessment Using a Deep Bilinear Convolutional Neural Network

TLDR
A deep bilinear model for blind image quality assessment that works for both synthetically and authentically distorted images and achieves state-of-the-art performance on both synthetic and authentic IQA databases is proposed.
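Bilinear pooling of two feature streams, as described above, can be written compactly; the sketch below assumes two convolutional feature maps of matching spatial size and uses illustrative names and shapes, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BilinearQualityHead(nn.Module):
    """Fuses two CNN feature maps by an outer product, then regresses a score."""
    def __init__(self, c1, c2):
        super().__init__()
        self.fc = nn.Linear(c1 * c2, 1)

    def forward(self, feat1, feat2):
        # feat1: (B, c1, H, W) from one stream; feat2: (B, c2, H, W) from the other.
        b, c1, h, w = feat1.shape
        f1 = feat1.flatten(2)                                      # (B, c1, H*W)
        f2 = feat2.flatten(2)                                      # (B, c2, H*W)
        bilinear = torch.bmm(f1, f2.transpose(1, 2)) / (h * w)     # (B, c1, c2)
        x = bilinear.flatten(1)
        x = F.normalize(torch.sign(x) * torch.sqrt(x.abs() + 1e-8))  # signed sqrt + L2
        return self.fc(x)                                          # (B, 1) quality score
```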

NIMA: Neural Image Assessment

TLDR
The proposed approach relies on the success (and retraining) of proven, state-of-the-art deep object recognition networks and can be used to not only score images reliably and with high correlation to human perception, but also to assist with adaptation and optimization of photo editing/enhancement algorithms in a photographic pipeline.

Waterloo Exploration Database: New Challenges for Image Quality Assessment Models

TLDR
This work establishes a large-scale database named the Waterloo Exploration Database, which in its current state contains 4,744 pristine natural images and 94,880 distorted images created from them, and presents three alternative test criteria to evaluate the performance of IQA models, namely, the pristine/distorted image discriminability test, the listwise ranking consistency test, and the pairwise preference consistency test.

Deep Neural Networks for No-Reference and Full-Reference Image Quality Assessment

TLDR
A deep neural network-based approach to image quality assessment (IQA) is proposed that allows for joint learning of local quality and local weights in a unified framework and shows a high ability to generalize between different databases, indicating high robustness of the learned features.

On the use of deep learning for blind image quality assessment

TLDR
The best proposal, named DeepBIQ, estimates the image quality by average-pooling the scores predicted on multiple subregions of the original image, having a linear correlation coefficient with human subjective scores of almost 0.91.
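The average-pooling step described above is simple to sketch; the crop size, stride, and names below are assumptions, and the image is assumed to be at least one crop in each dimension.

```python
import torch

def score_by_crops(model, image, crop=224, stride=128):
    """image: (C, H, W) tensor; the image-level score is the mean of the
    model's predictions over overlapping subregions."""
    _, h, w = image.shape
    scores = []
    for top in range(0, h - crop + 1, stride):
        for left in range(0, w - crop + 1, stride):
            patch = image[:, top:top + crop, left:left + crop].unsqueeze(0)
            scores.append(model(patch).item())
    return sum(scores) / len(scores)
```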

Two-Stream Convolutional Networks for Blind Image Quality Assessment

TLDR
A new deep neural network that accurately predicts image quality without relying on a reference image is described; the proposed algorithm outperforms state-of-the-art methods, verifying the effectiveness of the network architecture.
...