Corpus ID: 244773162

Learning Transformer Features for Image Quality Assessment

@article{Zeng2021LearningTF,
  title={Learning Transformer Features for Image Quality Assessment},
  author={Chao Zeng and Sam Tak Wu Kwong},
  journal={ArXiv},
  year={2021},
  volume={abs/2112.00485}
}
Objective image quality assessment is a challenging task that aims to measure the quality of a given image automatically. Depending on whether a reference image is available, IQA is divided into Full-Reference (FR) and No-Reference (NR) tasks. Most deep learning approaches regress a quality score from deep features extracted by Convolutional Neural Networks; for the FR task, another option is a statistical comparison of the deep features. In all these methods, non-local information is usually…
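
A hedged sketch of the two FR strategies mentioned above: (a) regression from pooled deep features versus (b) a direct statistical comparison of feature maps. The toy CNN backbone and layer sizes are illustrative assumptions, not the paper's architecture.

import torch
import torch.nn as nn

# Toy stand-in for a pretrained CNN feature extractor (assumption).
backbone = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
)
regressor = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

def fr_regression_score(ref, dist):
    # (a) regress quality from pooled reference and distorted features
    f_ref = backbone(ref).mean(dim=(2, 3))
    f_dist = backbone(dist).mean(dim=(2, 3))
    return regressor(torch.cat([f_ref, f_dist], dim=1))

def fr_statistical_score(ref, dist):
    # (b) compare feature statistics directly, with no learned head
    f_ref, f_dist = backbone(ref), backbone(dist)
    d_mean = (f_ref.mean((2, 3)) - f_dist.mean((2, 3))).pow(2).sum(1)
    d_std = (f_ref.std((2, 3)) - f_dist.std((2, 3))).pow(2).sum(1)
    return -(d_mean + d_std)  # higher means the pair is more similar

ref, dist = torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224)
print(fr_regression_score(ref, dist).item(), fr_statistical_score(ref, dist).item())
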
1 Citation

Multi-Scale Features and Parallel Transformers Based Image Quality Assessment

TLDR
This work proposes a new architecture that integrates these two promising image quality assessment techniques and demonstrates that the proposed integration outperforms existing algorithms.

References

Showing 1-10 of 45 references

Perceptual Image Quality Assessment with Transformers

TLDR
An image quality transformer (IQT) is proposed that successfully applies a transformer architecture to the perceptual full-reference image quality assessment (IQA) task, using an extra learnable quality embedding and a position embedding.
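
A minimal sketch of this design, with illustrative sizes rather than the paper's: a learnable quality token is prepended to patch features, learnable position embeddings are added, and the score is read out from the token after the encoder.

import torch
import torch.nn as nn

d_model, n_patches = 128, 64
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
quality_token = nn.Parameter(torch.zeros(1, 1, d_model))          # extra learnable quality embedding
pos_embed = nn.Parameter(torch.zeros(1, n_patches + 1, d_model))  # learnable position embedding
head = nn.Linear(d_model, 1)

patch_feats = torch.randn(2, n_patches, d_model)  # stand-in for CNN/ViT patch features
tokens = torch.cat([quality_token.expand(2, -1, -1), patch_feats], dim=1) + pos_embed
score = head(encoder(tokens)[:, 0])               # quality regressed from the token
print(score.shape)                                # torch.Size([2, 1])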

Deep Neural Networks for No-Reference and Full-Reference Image Quality Assessment

TLDR
A deep neural network-based approach to image quality assessment (IQA) that allows joint learning of local quality and local weights in a unified framework, and that shows a high ability to generalize between different databases, indicating high robustness of the learned features.
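
The joint local-quality/local-weight idea can be sketched as two heads over patch features, with the image score as a weighted average; this minimal form is an assumption in the spirit of the approach, not the exact network.

import torch
import torch.nn as nn

feat_dim, n_patch = 64, 16
quality_head = nn.Linear(feat_dim, 1)  # predicts a quality per patch
weight_head = nn.Linear(feat_dim, 1)   # predicts a (positive) weight per patch

patch_feats = torch.randn(4, n_patch, feat_dim)              # per-patch deep features
q = quality_head(patch_feats).squeeze(-1)                    # local quality scores
w = torch.relu(weight_head(patch_feats)).squeeze(-1) + 1e-6  # positive local weights
score = (q * w).sum(dim=1) / w.sum(dim=1)                    # weighted-average image score
print(score.shape)  # torch.Size([4])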

No-Reference Image Quality Assessment via Transformers, Relative Ranking, and Self-Consistency

TLDR
A novel model is proposed that leverages self-consistency as a source of self-supervision to improve the robustness of NR-IQA models, enforcing consistency between the outputs of the quality assessment model for each image and its transformation in order to reduce the model's uncertainty.
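
A minimal sketch of the self-consistency term, assuming a horizontal flip as the quality-preserving transformation (the paper's exact transforms may differ):

import torch
import torch.nn.functional as F

def self_consistency_loss(model, images):
    # penalize disagreement between predictions on an image and its transform
    flipped = torch.flip(images, dims=[-1])  # quality-preserving flip (assumption)
    return F.mse_loss(model(images), model(flipped))

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 1))
print(self_consistency_loss(model, torch.rand(2, 3, 32, 32)).item())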

MUSIQ: Multi-scale Image Quality Transformer

TLDR
A novel hash-based 2D spatial embedding and a scale embedding are proposed to support positional embedding in the multi-scale representation for IQA, achieving state-of-the-art performance on multiple large-scale IQA datasets.
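
A rough sketch of the hash-based idea: patch coordinates from an arbitrary-size grid are hashed onto a fixed G x G table so images of any resolution share one positional table, and a small scale embedding tags the resolution. G and the dimensions below are illustrative, not the paper's values.

import torch
import torch.nn as nn

G, d_model, n_scales = 10, 64, 3
spatial_table = nn.Embedding(G * G, d_model)  # fixed-size hashed position table
scale_table = nn.Embedding(n_scales, d_model)

def spatial_embed(h, w, scale_idx):
    # embedding for an h x w patch grid at a given scale index
    ti = (torch.arange(h).float() * G / h).long()      # hash rows onto G bins
    tj = (torch.arange(w).float() * G / w).long()      # hash cols onto G bins
    idx = (ti[:, None] * G + tj[None, :]).reshape(-1)  # flat hashed indices
    return spatial_table(idx) + scale_table(torch.tensor(scale_idx))

emb = spatial_embed(7, 12, scale_idx=1)  # works for any grid size
print(emb.shape)  # torch.Size([84, 64])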

Learning Conditional Knowledge Distillation for Degraded-Reference Image Quality Assessment

TLDR
This paper proposes a practical solution named degraded-reference IQA (DR-IQA), which exploits the inputs of image restoration (IR) models, i.e., degraded images, as references, extracting reference information from them by distilling knowledge from pristine-quality images.
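
A hedged sketch of the distillation step as described: a teacher sees the pristine image, a student sees only the degraded one, and the student's features are pulled toward the teacher's so the degraded input can serve as a reference at test time. The single-layer extractors are stand-ins.

import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Conv2d(3, 16, 3, padding=1)  # stand-in pristine-feature extractor
student = nn.Conv2d(3, 16, 3, padding=1)  # stand-in degraded-feature extractor

def distill_loss(pristine, degraded):
    with torch.no_grad():
        t_feat = teacher(pristine)        # reference features, no gradient
    return F.mse_loss(student(degraded), t_feat)

print(distill_loss(torch.rand(2, 3, 32, 32), torch.rand(2, 3, 32, 32)).item())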

Saliency-Guided Transformer Network combined with Local Embedding for No-Reference Image Quality Assessment

TLDR
A novel Saliency-Guided Transformer Network combined with Local Embedding (TranSLA) is proposed for No-Reference Image Quality Assessment (NR-IQA), introducing a Boosting Interaction Module (BIM) to enhance feature aggregation.
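
A minimal sketch of saliency-guided aggregation under assumed shapes: a saliency map re-weights spatial features before pooling so salient regions dominate the quality estimate. The actual TranSLA modules (including BIM) are richer than this.

import torch
import torch.nn as nn

features = torch.randn(2, 64, 14, 14)                  # backbone feature map
saliency = torch.rand(2, 1, 14, 14)                    # saliency map from a saliency branch (assumption)
w = saliency / saliency.sum(dim=(2, 3), keepdim=True)  # normalize to spatial weights
pooled = (features * w).sum(dim=(2, 3))                # saliency-weighted pooling
score = nn.Linear(64, 1)(pooled)
print(score.shape)  # torch.Size([2, 1])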

Region-Adaptive Deformable Network for Image Quality Assessment

  • Shu Shi, Qingyan Bai, Yujiu Yang
  • Computer Science
    2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
  • 2021
TLDR
The reference-oriented deformable convolution is proposed, which improves the performance of an IQA network on GAN-based distortion by adaptively accounting for this misalignment, together with a patch-level attention module that enhances interaction among different patch regions, which were processed independently in previous patch-based methods.
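
A sketch of the reference-oriented idea under assumed shapes: sampling offsets are predicted from the concatenated reference/distorted features, letting the deformable convolution compensate for the spatial misalignment typical of GAN-based distortion.

import torch
import torch.nn as nn
from torchvision.ops import deform_conv2d

c, k = 16, 3
offset_net = nn.Conv2d(2 * c, 2 * k * k, 3, padding=1)  # offsets from the ref+dist pair
weight = nn.Parameter(torch.randn(c, c, k, k) * 0.01)   # deformable conv kernel

ref = torch.randn(1, c, 32, 32)
dist = torch.randn(1, c, 32, 32)
offset = offset_net(torch.cat([ref, dist], dim=1))      # misalignment-aware offsets
out = deform_conv2d(dist, offset, weight, padding=1)    # sample distorted features adaptively
print(out.shape)  # torch.Size([1, 16, 32, 32])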

MetaIQA: Deep Meta-Learning for No-Reference Image Quality Assessment

TLDR
A no-reference IQA metric based on deep meta-learning that outperforms the state of the art by a large margin and generalizes easily to authentic distortions, which is highly desired in real-world applications of IQA metrics.
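
A first-order, MAML-style loop is one way to sketch the meta-learning here (an assumption; the paper's optimization details may differ): adapt on a distortion type's support set, then update the shared initialization from the query loss so the prior transfers to unseen distortions.

import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(16, 1)  # stand-in quality model over precomputed features
meta_opt = torch.optim.SGD(model.parameters(), lr=1e-2)
inner_lr = 0.1

def meta_step(support, query):
    (xs, ys), (xq, yq) = support, query
    # inner step: weights adapted to this distortion task
    grads = torch.autograd.grad(F.mse_loss(model(xs), ys), list(model.parameters()))
    adapted = [p - inner_lr * g for p, g in zip(model.parameters(), grads)]
    # outer step: evaluate adapted weights on query data (first-order update)
    loss_q = F.mse_loss(F.linear(xq, adapted[0], adapted[1]), yq)
    meta_opt.zero_grad()
    loss_q.backward()
    meta_opt.step()
    return loss_q.item()

task = lambda: (torch.randn(8, 16), torch.randn(8, 1))
print(meta_step(task(), task()))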

Deep Learning-based Distortion Sensitivity Prediction for Full-Reference Image Quality Assessment

TLDR
This work uses DeepQA as a baseline model on a challenge database that includes various distortions, and improves the baseline by dividing it into three parts and modifying each: the distortion encoding network, the sensitivity generation network, and the score regression.
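
The three-part decomposition can be sketched as below, with assumed shapes and single-layer stand-ins for each part: a distortion-encoding network maps the pair to a code, a sensitivity network turns the code into a per-pixel sensitivity map, and the regressor pools the sensitivity-weighted error into a score.

import torch
import torch.nn as nn

distortion_enc = nn.Conv2d(6, 16, 3, padding=1)   # ref+dist -> distortion code
sensitivity_net = nn.Conv2d(16, 1, 3, padding=1)  # code -> sensitivity map
score_reg = nn.Linear(1, 1)                       # pooled weighted error -> score

ref, dist = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
code = distortion_enc(torch.cat([ref, dist], dim=1))
sens = torch.sigmoid(sensitivity_net(code))          # where errors matter most
err = (ref - dist).pow(2).mean(dim=1, keepdim=True)  # raw pixel error
score = score_reg((sens * err).mean(dim=(2, 3)))
print(score.shape)  # torch.Size([1, 1])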

Active Fine-Tuning From gMAD Examples Improves Blind Image Quality Assessment

  • Zhihua Wang, Kede Ma
  • Computer Science
    IEEE Transactions on Pattern Analysis and Machine Intelligence
  • 2022
TLDR
This work first pre-trains a DNN-based BIQA model using multiple noisy annotators and fine-tunes it on multiple synthetically distorted images, yielding a “top-performing” baseline model, which is then fine-tuned on a combination of human-rated images from gMAD and existing databases.
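
A much-simplified sketch of the gMAD-style selection driving this loop (an assumption: real gMAD searches for pairs where one model predicts a large quality difference while the other's prediction stays fixed; here this is reduced to raw cross-model disagreement):

import torch

def gmad_candidates(defender, attacker, pool, k=4):
    # return the k images on which the two models disagree the most
    with torch.no_grad():
        gap = (defender(pool) - attacker(pool)).squeeze(-1).abs()
    return pool[gap.topk(k).indices]

defender = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 1))
attacker = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 1))
hard = gmad_candidates(defender, attacker, torch.rand(64, 3, 32, 32))
print(hard.shape)  # torch.Size([4, 3, 32, 32])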