Corpus ID: 235253978

Blind Quality Assessment for in-the-Wild Images via Hierarchical Feature Fusion and Iterative Mixed Database Training

@article{Sun2021BlindQA,
  title={Blind Quality Assessment for in-the-Wild Images via Hierarchical Feature Fusion and Iterative Mixed Database Training},
  author={Wei Sun and Xiongkuo Min and Guangtao Zhai and Siwei Ma},
  journal={ArXiv},
  year={2021},
  volume={abs/2105.14550}
}
Image quality assessment (IQA) is very important for both end users and service providers, since a high-quality image can significantly improve the user's quality of experience (QoE). Most existing blind image quality assessment (BIQA) models were developed for synthetically distorted images; however, they perform poorly on in-the-wild images, which widely exist in various practical applications. In this paper, we propose a novel BIQA model for in-the-wild images by addressing two critical… 
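
The title names two ingredients: hierarchical feature fusion and iterative mixed database training. To make the first idea concrete, here is a minimal, hypothetical PyTorch sketch of hierarchical feature fusion for blind IQA: it taps the four stage outputs of a ResNet-50 backbone, globally pools each one, and concatenates them before a small quality-regression head. The class name HierarchicalFusionIQA, the choice of backbone, and all layer sizes are illustrative assumptions, not the authors' released architecture.

```python
# Hypothetical sketch of hierarchical feature fusion for blind IQA.
# Assumptions (not from the paper): ResNet-50 backbone, global average
# pooling of each stage output, and a two-layer regression head.
import torch
import torch.nn as nn
import torchvision.models as models


class HierarchicalFusionIQA(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=None)  # torchvision >= 0.13 API
        # Keep the stem and the four residual stages separate so that
        # intermediate (hierarchical) features can be tapped.
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,
                                  backbone.relu, backbone.maxpool)
        self.stages = nn.ModuleList([backbone.layer1, backbone.layer2,
                                     backbone.layer3, backbone.layer4])
        self.pool = nn.AdaptiveAvgPool2d(1)
        # ResNet-50 stage channel widths: 256, 512, 1024, 2048.
        self.head = nn.Sequential(nn.Linear(256 + 512 + 1024 + 2048, 512),
                                  nn.ReLU(inplace=True),
                                  nn.Linear(512, 1))  # scalar quality score

    def forward(self, x):
        feats = []
        x = self.stem(x)
        for stage in self.stages:
            x = stage(x)
            feats.append(self.pool(x).flatten(1))  # pool each stage output
        fused = torch.cat(feats, dim=1)            # hierarchical fusion
        return self.head(fused).squeeze(1)


if __name__ == "__main__":
    model = HierarchicalFusionIQA()
    scores = model(torch.randn(2, 3, 224, 224))
    print(scores.shape)  # torch.Size([2])
```

In a mixed database setting, a model of this kind is typically trained with losses that are robust to the different score scales of each database (for example, rank- or correlation-based losses), which is plausibly the kind of issue the iterative mixed database training in the title is meant to address.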

Deep Superpixel-based Network for Blind Image Quality Assessment

A deep adaptive superpixel-based network, DSN-IQA, is proposed to assess image quality based on multi-scale features and superpixel segmentation; it is highly competitive with other methods when assessing challenging authentic image databases.

Feature Grouping for No-reference Image Quality Assessment

  • Yanan Dai, Zijun Gao, Zheyi Li
  • Computer Science
    2022 7th International Conference on Automation, Control and Robotics Engineering (CACRE)
  • 2022
The experimental results show that the proposed grouping architecture model (FGIQNet) outperforms other advanced NR-IQA models on four IQA databases with synthetic and authentic distortions, and achieves excellent performance in cross-database evaluation, which demonstrates the effectiveness and generalizability of the proposed model.

Learning from Synthetic Data for Opinion-free Blind Image Quality Assessment in the Wild

An opinion-free BIQA method is proposed that learns from synthetically-distorted images and multiple agents to assess the perceptual quality of authentically-distorted images captured in the wild, without relying on human labels.

A Deep Learning based No-reference Quality Assessment Model for UGC Videos

A very simple but effective UGC VQA model is proposed, which addresses this problem by training an end-to-end spatial feature extraction network to directly learn the quality-aware spatial feature representation from the raw pixels of the video frames.

Deep Learning Based Full-Reference and No-Reference Quality Assessment Models for Compressed UGC Videos

The proposed deep learning-based video quality assessment (VQA) framework achieves the best performance among state-of-the-art FR and NR VQA models on the Compressed UGC VQA database, and also achieves good performance on the in-the-wild UGC VQA databases.

No-reference image quality assessment with multi-scale weighted residuals and channel attention mechanism

A multi-scale residual CNN with an attention mechanism (MsRCANet) is proposed for NR-IQA, which has good generalization ability and is competitive with the most advanced methods.

Blind Surveillance Image Quality Assessment via Deep Neural Network Combined with the Visual Saliency

A saliency-based deep neural network is proposed for the blind quality assessment of surveillance images (SIs), which helps intelligent video surveillance systems (IVSS) filter out low-quality SIs and improve detection and recognition performance.

No-Reference Quality Assessment for Colored Point Cloud and Mesh Based on Natural Scene Statistics

Quality-aware features are extracted, in terms of color and geometry, directly from the 3D models, and statistical parameters are estimated using different distribution models to describe the characteristics of the 3D models.

Subjective Quality Assessment for Images Generated by Computer Graphics

A large-scale subjective CG-IQA database is established and several popular no-reference image quality assessment methods are tested, showing that handcrafted-feature-based methods achieve low correlation with subjective judgments while deep learning-based methods obtain relatively better performance.

Screen Content Quality Assessment: Overview, Benchmark, and Beyond

The background, history, recent progress, and future of the emerging screen content quality assessment research are provided, including an overview of the most technology-oriented part of QoE modeling.

References

Showing 1-10 of 59 references

Learning To Blindly Assess Image Quality In The Laboratory And Wild

A BIQA model and an approach for training it on multiple IQA databases (of different distortion scenarios) simultaneously are developed, demonstrating that the model optimized by the proposed training strategy is effective in blindly assessing image quality in the laboratory and wild, outperforming previous BIQA methods by a large margin.

dipIQ: Blind Image Quality Assessment by Learning-to-Rank Discriminable Image Pairs

This paper shows that a vast amount of reliable training data in the form of quality-discriminable image pairs (DIPs) can be obtained automatically at low cost by exploiting large-scale databases with diverse image content, and learns an opinion-unaware BIQA (OU-BIQA, meaning that no subjective opinions are used for training) model from millions of DIPs, leading to a DIP inferred quality (dipIQ) index.

Blindly Assess Image Quality in the Wild Guided by a Self-Adaptive Hyper Network

This work proposes a self-adaptive hyper network architecture to blindly assess image quality in the wild, which not only outperforms the state-of-the-art methods on challenging authentic image databases but also achieves competitive performance on synthetic image databases, though it is not explicitly designed for the synthetic task.

Uncertainty-Aware Blind Image Quality Assessment in the Laboratory and Wild

A unified BIQA model is developed and an approach for training it on both synthetic and realistic distortions is proposed, and the universality of the proposed training strategy is demonstrated by using it to improve existing BIQA models.

Learning a No-Reference Quality Assessment Model of Enhanced Images With Big Data

A new no-reference (NR) IQA model is developed and a robust image enhancement framework is established based on quality optimization, which can effectively enhance natural images, low-contrast images, low-light images, and dehazed images.

Unified Quality Assessment of in-the-Wild Videos with Mixed Datasets Training

A mixed datasets training strategy for training a single VQA model on multiple datasets is explored, and the superior performance of the unified model in comparison with state-of-the-art models is demonstrated.

Learning without Human Scores for Blind Image Quality Assessment

The proposed QAC-based BIQA method not only has accuracy comparable to methods that use human-scored images in learning, but also offers merits such as high linearity with human perception of image quality, real-time implementation, and availability of a local image quality map.

A Feature-Enriched Completely Blind Image Quality Evaluator

The proposed opinion-unaware BIQA method does not need any distorted sample images or subjective quality scores for training, yet extensive experiments demonstrate its superior quality-prediction performance compared to state-of-the-art opinion-aware BIQA methods.

Blind Image Quality Assessment Using a Deep Bilinear Convolutional Neural Network

A deep bilinear model for blind image quality assessment that works for both synthetically and authentically distorted images and achieves state-of-the-art performance on both synthetic and authentic IQA databases is proposed.

DeepSim: Deep similarity for image quality assessment

...