RankIQA: Learning from Rankings for No-Reference Image Quality Assessment

Xialei Liu, Joost van de Weijer, Andrew D. Bagdanov. 2017 IEEE International Conference on Computer Vision (ICCV), 2017.
We propose a no-reference image quality assessment (NR-IQA) approach that learns from rankings (RankIQA). A Siamese network is trained to rank images by quality using synthetically distorted image sets; these ranked image sets can be automatically generated without laborious human labeling. We then use fine-tuning to transfer the knowledge represented in the trained Siamese network to a traditional CNN that estimates absolute image quality from single images.
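The core of training a Siamese network on rankings is a pairwise margin (hinge) ranking loss: the score assigned to the less-distorted image should exceed the score of the more-distorted one by at least a margin. Below is a minimal NumPy sketch of that loss; the function name and example scores are illustrative, not taken from the paper's code.

```python
import numpy as np

def margin_ranking_loss(s_high, s_low, margin=1.0):
    """Pairwise hinge loss: the score of the higher-quality image
    should exceed that of the lower-quality image by at least `margin`.
    Loss is zero when the ranking is satisfied with enough margin."""
    return np.maximum(0.0, margin - (s_high - s_low))

# Scores a shared-weight network assigns to image pairs:
# s_high for the less-distorted image, s_low for the more-distorted one.
s_high = np.array([2.0, 0.5, 3.0])
s_low  = np.array([0.5, 0.8, 1.0])

losses = margin_ranking_loss(s_high, s_low)
# Pair 0: 2.0 - 0.5 = 1.5 >= margin -> loss 0
# Pair 1: 0.5 - 0.8 = -0.3 -> ranking violated, loss 1.3
# Pair 2: 3.0 - 1.0 = 2.0 >= margin -> loss 0
print(losses)
```

Because relative quality of the synthetic distortions is known by construction, every such pair is a free training label, which is what lets the method sidestep human annotation.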


Learning from Rankings with Multi-level Features for No-Reference Image Quality Assessment
A framework for NR-IQA is proposed based on transferring knowledge from the Siamese network to traditional CNNs by exploiting features from multiple layers, validating the effectiveness of this transfer-learning framework when multi-level information is considered.
Generalizable No-Reference Image Quality Assessment via Deep Meta-Learning
An optimization-based meta-learning approach is proposed to learn a generalized NR-IQA model, which can be directly used to evaluate the quality of images with unseen distortions and outperforms the state of the art in terms of both evaluation performance and generalization ability.
Active Fine-Tuning from gMAD Examples Improves Blind Image Quality Assessment
  • Zhihua Wang, Kede Ma
  • Computer Science
    IEEE transactions on pattern analysis and machine intelligence
  • 2021
This work first pre-trains a DNN-based BIQA model using multiple noisy annotators, then fine-tunes it on multiple subject-rated databases of synthetically distorted images, resulting in a top-performing baseline model and demonstrating the feasibility of the active learning scheme on a large-scale unlabeled image set.
No-Reference Image Sharpness Assessment Based on Rank Learning
A Siamese MobileNet network is trained by learning quality ranks among synthetically blurred and unsharpened seed images without any human label, which provides effective prior knowledge about appropriate image sharpness.
Learning from Synthetic Data for Opinion-free Blind Image Quality Assessment in the Wild
An opinion-free BIQA method is proposed that learns from synthetically-distorted images and multiple agents to assess the perceptual quality of authentically-distorted images captured in the wild without relying on human labels.
No-Reference Image Quality Assessment via Transformers, Relative Ranking, and Self-Consistency
A novel model that leverages self-consistency as a source of self-supervision to improve the robustness of NR-IQA models is proposed, enforcing self-consistency between the outputs of the quality assessment model for each image and its transformation to reduce the uncertainty of the model.
Controllable List-wise Ranking for Universal No-reference Image Quality Assessment
This paper presents an imaging-heuristic approach in which over- and under-exposure is formulated as an inverse of the Weber-Fechner law, and a fusion strategy and probabilistic compression are adopted to generate degraded real-world images associated with quality-ranking information.
Hallucinated-IQA: No-Reference Image Quality Assessment via Adversarial Learning
A hallucination-guided quality regression network is proposed to address the issue of no-reference image quality assessment, and significantly outperforms all the previous state-of-the-art methods by large margins.
Learning To Blindly Assess Image Quality In The Laboratory And Wild
A BIQA model and an approach for training it on multiple IQA databases (of different distortion scenarios) simultaneously are developed, demonstrating that the model optimized by the proposed training strategy is effective in blindly assessing image quality in the laboratory and the wild, outperforming previous BIQA methods by a large margin.


Learning to Rank for Blind Image Quality Assessment
This paper explores and exploits preference image pairs (e.g., the quality of image Ia is better than that of image Ib) for training a robust BIQA model, and investigates a multiple kernel learning algorithm based on group lasso to provide a solution.
Group MAD Competition? A New Methodology to Compare Objective Image Quality Models
  • Kede Ma, Q. Wu, Lei Zhang
  • Computer Science
    2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2016
A new mechanism, namely group MAximum Differentiation (gMAD) competition, which automatically selects subsets of image pairs from the database that provide the strongest test to let the IQA models compete with each other, is proposed.
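One gMAD query can be summarized as: fix a "defender" model, and among images the defender rates as equal in quality, pick the pair on which an "attacker" model disagrees the most. The NumPy sketch below is a simplified illustration of that selection step; the function name and the equality tolerance `tol` are assumptions for the example, not parameters from the paper.

```python
import numpy as np

def gmad_pair(defender, attacker, tol=0.05):
    """Select the image pair maximizing the attacker model's score
    difference while the defender model rates both images (near-)equally.
    `defender` and `attacker` are per-image quality scores over a database."""
    n = len(defender)
    best, best_gap = None, -np.inf
    for i in range(n):
        for j in range(n):
            # Candidate pairs: defender considers the two images tied.
            if i != j and abs(defender[i] - defender[j]) <= tol:
                gap = attacker[i] - attacker[j]
                if gap > best_gap:
                    best, best_gap = (i, j), gap
    return best, best_gap

# Toy scores for four images (illustrative values only).
defender = np.array([0.50, 0.52, 0.90, 0.51])
attacker = np.array([0.10, 0.95, 0.40, 0.30])

pair, gap = gmad_pair(defender, attacker)
print(pair, gap)  # the defender ties these two; the attacker disagrees most
```

Showing such maximally-disagreeing pairs to human observers gives the strongest possible test between the two models with very few subjective judgments.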
On the use of deep learning for blind image quality assessment
The best proposal, named DeepBIQ, estimates the image quality by average-pooling the scores predicted on multiple subregions of the original image, having a linear correlation coefficient with human subjective scores of almost 0.91.
Convolutional Neural Networks for No-Reference Image Quality Assessment
A Convolutional Neural Network is described to accurately predict image quality without a reference image to achieve state of the art performance on the LIVE dataset and shows excellent generalization ability in cross dataset experiments.
Image Quality Assessment Using Similar Scene as Reference
It is shown that non-aligned image with similar scene could be well used for reference, using a proposed Dual-path deep Convolutional Neural Network (DCNN), and analysis indicates that the model captures the scene structural information and non-structural information “naturalness” between the pair for quality assessment.
A deep neural network for image quality assessment
This paper presents a no-reference (NR) image quality assessment (IQA) method based on a deep convolutional neural network (CNN). The CNN takes unpreprocessed image patches as input and estimates the image quality.
Unsupervised feature learning framework for no-reference image quality assessment
This paper uses raw image patches extracted from a set of unlabeled images to learn a dictionary in an unsupervised manner and uses soft-assignment coding with max pooling to obtain effective image representations for quality estimation.
A Learning-to-Rank Approach for Image Color Enhancement
This work formulates the color enhancement task as a learning-to-rank problem in which ordered pairs of images are used for training; various color enhancements of a novel input image can then be evaluated from their corresponding rank values.
Learning to compare image patches via convolutional neural networks
This paper shows how to learn directly from image data a general similarity function for comparing image patches, which is a task of fundamental importance for many computer vision problems, and opts for a CNN-based model that is trained to account for a wide variety of changes in image appearance.
FSIM: A Feature Similarity Index for Image Quality Assessment
A novel feature similarity (FSIM) index for full reference IQA is proposed based on the fact that human visual system (HVS) understands an image mainly according to its low-level features.