Image Quality Assessment Using Contrastive Learning

@article{Madhusudana2022ImageQA,
  title={Image Quality Assessment Using Contrastive Learning},
  author={Pavan C. Madhusudana and Neil Birkbeck and Yilin Wang and Balu Adsumilli and Alan Conrad Bovik},
  journal={IEEE Transactions on Image Processing},
  year={2022},
  volume={31},
  pages={4149-4161}
}
We consider the problem of obtaining image quality representations in a self-supervised manner. We use prediction of distortion type and degree as an auxiliary task to learn features from an unlabeled image dataset containing a mixture of synthetic and realistic distortions. We then train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve the auxiliary problem. We refer to the proposed training framework and resulting deep IQA model as the CONTRastive… 
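The contrastive pairwise objective is described in detail in the paper; purely as a rough illustration, the sketch below shows an NT-Xent-style loss in which two CNN embeddings are treated as a positive pair when they carry the same distortion type/degree label. The function name, labeling scheme, and temperature value are assumptions for illustration, not the authors' released implementation.

```python
# Rough sketch only (not the authors' released code): an NT-Xent-style
# contrastive objective where two embeddings form a positive pair when they
# share the same (distortion type, degree) label.
import torch
import torch.nn.functional as F

def pairwise_contrastive_loss(embeddings, distortion_labels, temperature=0.1):
    """embeddings: (N, D) CNN features; distortion_labels: (N,) integer codes
    for distortion type/degree. Samples with equal labels are positives."""
    z = F.normalize(embeddings, dim=1)                     # unit-norm features
    sim = z @ z.t() / temperature                          # scaled cosine similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = distortion_labels.unsqueeze(0).eq(distortion_labels.unsqueeze(1)) & ~self_mask
    logits = sim.masked_fill(self_mask, float('-inf'))     # exclude self-similarity
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    per_sample = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(dim=1) / pos_counts
    return per_sample[pos_mask.any(dim=1)].mean()          # average over samples with positives

# toy usage: 8 random features, 4 hypothetical distortion classes
loss = pairwise_contrastive_loss(torch.randn(8, 128), torch.randint(0, 4, (8,)))
```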


Image Quality Assessment using Synthetic Images

It is shown that this model achieves comparable performance to state-of-the-art NR image quality models when evaluated on real images afflicted with synthetic distortions, even without using any real images during training.

CONVIQT: Contrastive Video Quality Estimator

This work considers the problem of learning perceptually relevant video quality representations in a self-supervised manner, and indicates that compelling representations with perceptual bearing can be obtained using self-supervised learning.

No-Reference Image Quality Assessment with Convolutional Neural Networks and Decision Fusion

  • D. Varga
  • Computer Science
    Applied Sciences
  • 2021
A novel deep learning-based NR-IQA architecture is proposed that relies on the decision fusion of multiple image quality scores coming from different types of convolutional neural networks to characterize authentic image distortions better than a single network can.

Multiview Contrastive Learning for Completely Blind Video Quality Assessment of User Generated Content

This work presents a self-supervised multiview contrastive learning framework to learn spatio-temporal quality representations; it captures the common information between frame differences and frames by treating them as a pair of views, and similarly obtains the shared representations between frame differences and optical flow.

Blind Image Quality Assessment for Authentic Distortions by Intermediary Enhancement and Iterative Training

This paper identifies that two challenges caused by distribution shift and long-tailed distribution lead to the compromised performance on low-quality images and proposes an intermediary enhancement-based bilateral network with iterative training strategy for solving these two challenges.

HVS Revisited: A Comprehensive Video Quality Assessment Framework

A no-reference VQA framework called HVS-5M is proposed, an NR-VQA framework with five modules simulating representative characteristics of the HVS and a reorganization of their connections, which outperforms state-of-the-art VQA methods.

Contrastive distortion‐level learning‐based no‐reference image‐quality assessment

Experimental results on many NR-IQA datasets show that the proposed method can outperform state-of-the-art methods.

Semisupervised Few-Shot Remote Sensing Image Classification Based on KNN Distance Entropy

  • Xuewei Chao, Yang Li
  • Computer Science
    IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
  • 2022
A novel data information quality assessment method, called K-nearest neighbor (KNN) distance entropy, is proposed to screen and evaluate remote sensing images under few-shot conditions, and inspires data-efficient few-shot learning based on high-quality data in the remote sensing field.
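The exact formulation of KNN distance entropy is given in the cited paper; the following is only a hypothetical sketch of the general idea, in which each sample's k nearest-neighbor distances are normalized into a distribution whose Shannon entropy serves as a data-quality score. All names and parameter choices here are illustrative assumptions.

```python
# Hypothetical sketch of a KNN distance-entropy score; the exact formulation
# is defined in the cited paper, this only illustrates the general idea.
import numpy as np

def knn_distance_entropy(features, k=5):
    """features: (N, D) image descriptors. Returns one entropy value per sample,
    computed from the distribution of its k nearest-neighbor distances."""
    diffs = features[:, None, :] - features[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))             # pairwise Euclidean distances
    np.fill_diagonal(dists, np.inf)                        # ignore self-distance
    knn = np.sort(dists, axis=1)[:, :k]                    # k smallest distances per row
    p = knn / (knn.sum(axis=1, keepdims=True) + 1e-12)     # normalize into a distribution
    return -(p * np.log(p + 1e-12)).sum(axis=1)            # Shannon entropy per sample

# toy usage: score 100 random 64-D descriptors
scores = knn_distance_entropy(np.random.rand(100, 64), k=5)
```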

BVI-VFI: A Video Quality Database for Video Frame Interpolation

A new video quality database named BVI-VFI is developed, which contains 540 distorted sequences generated by applying commonly used VFI algorithms to 36 diverse source videos with various spatial resolutions and frame rates, and demonstrates the urgent requirement for more accurate bespoke quality assessment methods for VFI.

Telepresence Video Quality Assessment

This work creates a first-of-its-kind online video quality prediction framework for live streaming, using a multi-modal learning framework with separate pathways for visual and audio quality prediction, which is able to provide accurate quality predictions at the patch, frame, clip, and audiovisual levels.

References

Showing 1-10 of 62 references

Blind Image Quality Assessment Using a Deep Bilinear Convolutional Neural Network

A deep bilinear model for blind image quality assessment that works for both synthetically and authentically distorted images and achieves state-of-the-art performance on both synthetic and authentic IQA databases is proposed.

A Probabilistic Quality Representation Approach to Deep Blind Image Quality Prediction

The proposed PQR method is shown to not only speed up the convergence of deep model training, but to also greatly improve the achievable level of quality prediction accuracy relative to scalar quality score regression methods.

Fully Deep Blind Image Quality Predictor

A blind image evaluator based on a convolutional neural network (BIECON) is proposed that follows FR-IQA behavior by using local quality maps as intermediate targets for conventional neural networks, leading to NR-IQA prediction accuracy comparable with that of state-of-the-art FR-IQA methods.

Convolutional Neural Networks for No-Reference Image Quality Assessment

A Convolutional Neural Network is described that accurately predicts image quality without a reference image, achieving state-of-the-art performance on the LIVE dataset and showing excellent generalization ability in cross-dataset experiments.

Context Encoders: Feature Learning by Inpainting

It is found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures, and can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.

DeepFL-IQA: Weak Supervision for Deep IQA Feature Learning

This work proposes a new IQA dataset and a weakly supervised feature learning approach to train features more suitable for IQA of artificially distorted images, and introduces a benchmark database, KADID-10k, of artificially degraded images, each subjectively annotated by 30 crowd workers.

Blindly Assess Image Quality in the Wild Guided by a Self-Adaptive Hyper Network

This work proposes a self-adaptive hyper network architecture to blindly assess image quality in the wild, which not only outperforms state-of-the-art methods on challenging authentic image databases but also achieves competitive performance on synthetic image databases, though it is not explicitly designed for the synthetic task.

Unsupervised feature learning framework for no-reference image quality assessment

This paper uses raw image patches extracted from a set of unlabeled images to learn a dictionary in an unsupervised manner and uses soft-assignment coding with max pooling to obtain effective image representations for quality estimation.
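As a rough sketch of the codebook pipeline this summary describes, the code below learns a dictionary from raw unlabeled patches (here with k-means, an assumed stand-in for the paper's dictionary-learning step), encodes an image's patches by soft assignment against the codewords, and max-pools the codes into an image-level feature. Details such as patch normalization and the exact coding rule follow the cited paper and are not reproduced here.

```python
# Rough sketch of a codebook pipeline (k-means codebook and dot-product
# soft assignment are assumptions; the cited paper's exact steps may differ).
import numpy as np
from sklearn.cluster import MiniBatchKMeans

def learn_dictionary(patches, n_codewords=100):
    """patches: (M, P) matrix of normalized raw patches from unlabeled images."""
    km = MiniBatchKMeans(n_clusters=n_codewords, n_init=3, random_state=0)
    return km.fit(patches).cluster_centers_                # (n_codewords, P)

def encode_image(image_patches, dictionary):
    """Soft-assignment coding of one image's patches, then max pooling."""
    sims = image_patches @ dictionary.T                    # (num_patches, n_codewords)
    codes = np.concatenate([np.maximum(sims, 0),           # positive responses
                            np.maximum(-sims, 0)], axis=1) # negative responses
    return codes.max(axis=0)                               # max pooling -> image feature

codebook = learn_dictionary(np.random.randn(5000, 49), n_codewords=100)
feature = encode_image(np.random.randn(200, 49), codebook) # one image's descriptor
```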

Dynamic Receptive Field Generation for Full-Reference Image Quality Assessment

A novel FR-IQA framework that dynamically generates receptive fields responsive to distortion type is proposed that achieves state-of-the-art prediction accuracy on various open IQA databases.

PieAPP: Perceptual Image-Error Assessment Through Pairwise Preference

A new learning-based method that is the first to predict perceptual image error like human observers, and significantly outperforms existing algorithms, beating the state-of-the-art by almost 3× on the authors' test set in terms of binary error rate, while also generalizing to new kinds of distortions, unlike previous learning-based methods.
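As an illustration of pairwise-preference training in the spirit of this summary (not PieAPP's actual architecture or loss), the sketch below scores two distorted versions of a reference with a tiny stand-in network and ties the score difference to the human preference probability through a Bradley-Terry/logistic term.

```python
# Hypothetical sketch of pairwise-preference training (Bradley-Terry style);
# PieAPP's actual architecture and loss are defined in the cited paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ErrorScorer(nn.Module):
    """Tiny stand-in for a perceptual-error network: maps a (reference, distorted)
    pair to a scalar error score. A real model would be a deep CNN."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(16, 1))

    def forward(self, ref, dist):
        return self.net(torch.cat([ref, dist], dim=1)).squeeze(-1)

def preference_loss(scorer, ref, dist_a, dist_b, prob_a_preferred):
    """prob_a_preferred: human probability that version A looks better than B."""
    err_a, err_b = scorer(ref, dist_a), scorer(ref, dist_b)
    pred = torch.sigmoid(err_b - err_a)        # larger error for B => A preferred
    return F.binary_cross_entropy(pred, prob_a_preferred)

# toy usage with random 64x64 RGB crops and random preference labels
scorer = ErrorScorer()
ref = torch.rand(4, 3, 64, 64)
loss = preference_loss(scorer, ref, torch.rand_like(ref), torch.rand_like(ref), torch.rand(4))
loss.backward()
```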
...