Corpus ID: 19116950

RAN4IQA: Restorative Adversarial Nets for No-Reference Image Quality Assessment

@inproceedings{Ren2018RAN4IQARA,
  title={RAN4IQA: Restorative Adversarial Nets for No-Reference Image Quality Assessment},
  author={Hongyu Ren and Diqi Chen and Yizhou Wang},
  booktitle={AAAI},
  year={2018}
}
Inspired by the free-energy brain theory, which implies that the human visual system (HVS) tends to reduce uncertainty and restore perceptual details upon seeing a distorted image, we propose the restorative adversarial net (RAN), a GAN-based model for no-reference image quality assessment (NR-IQA). RAN, which mimics the process of the HVS, consists of three components: a restorator, a discriminator and an evaluator. The restorator restores and reconstructs input distorted image patches, while the…
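As a rough illustration of the three-component layout described in the abstract, the sketch below wires a restorator, a discriminator and an evaluator together in PyTorch. The layer sizes, the 32x32 patch size and the way the evaluator consumes the distorted/restored pair are assumptions made for this example, not the paper's actual architecture.

import torch
import torch.nn as nn

class Restorator(nn.Module):
    """Maps a distorted patch to a restored patch (residual restoration)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, x):
        return torch.clamp(x + self.net(x), 0.0, 1.0)

class Discriminator(nn.Module):
    """Distinguishes restored patches from pristine ones (GAN critic)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.Linear(128 * 8 * 8, 1),
        )

    def forward(self, x):
        return self.net(x)

class Evaluator(nn.Module):
    """Predicts a patch quality score from the distorted/restored pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(128 * 8 * 8, 1),
        )

    def forward(self, distorted, restored):
        return self.net(torch.cat([distorted, restored], dim=1))

# Usage on a batch of 32x32 distorted patches.
restorator, discriminator, evaluator = Restorator(), Discriminator(), Evaluator()
patches = torch.rand(4, 3, 32, 32)
restored = restorator(patches)
realness = discriminator(restored)      # adversarial training signal
quality = evaluator(patches, restored)  # per-patch quality prediction

In a full training loop the discriminator would also see pristine patches and the evaluator would be regressed onto subjective scores; those details are omitted here.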
Citations

No-Reference Image Quality Assessment: An Attention Driven Approach
TLDR: This paper makes the first attempt to formulate NR-IQA as a dynamic attentional process and to implement it via reinforcement learning, aiming to predict the perceptual quality of a test image without referencing its pristine-quality counterpart.
Active Inference of GAN for No-Reference Image Quality Assessment
TLDR: Benefiting from the primary content obtained from the GAN and the multiple-degradation measurement of the CNN, the proposed NR-IQA method achieves state-of-the-art performance on five public IQA databases.
No-Reference Image Quality Assessment: An Attention Driven Approach
TLDR: This paper tackles no-reference image quality assessment (NR-IQA), which aims to predict the perceptual quality of a distorted image without referencing its pristine-quality counterpart, and implements an attention-driven NR-IQA method with reinforcement learning (RL).
Blind Image Quality Assessment With Active Inference
TLDR: A novel BIQA metric that mimics the active inference process of the IGM achieves competitive performance on five popular IQA databases and, especially in cross-database evaluations, significant improvements.
Learning Conditional Knowledge Distillation for Degraded-Reference Image Quality Assessment
  • Heliang Zheng, Huan Yang, Jianlong Fu, Zhengjun Zha, Jiebo Luo
  • Computer Science, Engineering
  • ArXiv
  • 2021
TLDR: This paper proposes a practical solution named degraded-reference IQA (DR-IQA), which exploits the inputs of image restoration (IR) models, i.e., degraded images, as references, extracting reference information from them by distilling knowledge from pristine-quality images.
TSPR: Deep network-based blind image quality assessment using two-side pseudo reference images
TLDR: A deep network-based blind image quality assessment (BIQA) method using two-side pseudo reference (TSPR) images is presented, which delivers superior performance over state-of-the-art NR methods.
A Visual Residual Perception Optimized Network for Blind Image Quality Assessment
TLDR: A visual residual perception optimized network (VRPON) that can effectively solve blind image quality assessment problems not only performs better than state-of-the-art methods on synthetic distorted images, but is also more robust to different authentic distortions.
No-reference omnidirectional video quality assessment based on generative adversarial networks
  • Jiefeng Guo, Yao Luo
  • Computer Science
  • Multim. Tools Appl.
  • 2021
TLDR: An NR OVQA method based on generative adversarial networks (GANs) is proposed, composed of a reference video generator and a quality score predictor; the viewing direction of the omnidirectional video is incorporated to guide the quality and weight regression.
Domain-Aware No-Reference Image Quality Assessment
TLDR: The domain-aware no-reference image quality assessment (DA-NR-IQA) method is proposed, which for the first time exploits and disentangles the distinct representations of different degradations to assess image quality.
Controllable List-wise Ranking for Universal No-reference Image Quality Assessment
TLDR: This paper presents an imaging-heuristic approach, in which over- and under-exposure are formulated as an inverse of the Weber-Fechner law and a fusion strategy and probabilistic compression are adopted, to generate degraded real-world images associated with quality-ranking information.

References

Showing 1–10 of 38 references
Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
  • C. Ledig, Lucas Theis, +6 authors W. Shi
  • Computer Science, Mathematics
  • 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
  • 2017
TLDR: SRGAN, a generative adversarial network (GAN) for image super-resolution (SR), is presented: to the authors' knowledge, the first framework capable of inferring photo-realistic natural images at 4x upscaling factors, trained with a perceptual loss function that consists of an adversarial loss and a content loss.
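The loss composition named in this summary can be written down compactly. The PyTorch sketch below follows the commonly cited SRGAN formulation of a content loss on VGG19 feature maps plus a small adversarial term; the exact VGG layer and the 1e-3 weighting are assumptions taken from the usual description of the method rather than verified details.

import torch
import torch.nn.functional as F
from torchvision.models import vgg19

# Frozen VGG19 feature extractor used for the content (perceptual) loss.
vgg_features = vgg19(weights="IMAGENET1K_V1").features[:36].eval()
for p in vgg_features.parameters():
    p.requires_grad = False

def srgan_perceptual_loss(sr, hr, disc_logits_on_sr, adv_weight=1e-3):
    # Content loss: MSE between VGG feature maps of the super-resolved
    # image and the ground-truth high-resolution image.
    content = F.mse_loss(vgg_features(sr), vgg_features(hr))
    # Adversarial loss: push the discriminator to rate SR outputs as real.
    adversarial = F.binary_cross_entropy_with_logits(
        disc_logits_on_sr, torch.ones_like(disc_logits_on_sr))
    return content + adv_weight * adversarial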
Using Free Energy Principle For Blind Image Quality Assessment
TLDR: A new no-reference (NR) image quality assessment (IQA) metric is proposed that combines the recently revealed free-energy-based brain theory with classical human visual system (HVS)-inspired features, predicting the image that the HVS perceives from a distorted input according to the free-energy theory.
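As a toy illustration of the free-energy idea this metric builds on, namely reading image quality off how poorly an internal generative model can predict the distorted input, the sketch below uses a simple local-mean predictor and the entropy of the prediction residual. The paper's actual generative model and pooling differ; everything here is a stand-in.

import numpy as np
from scipy.ndimage import uniform_filter

def prediction_surprise(distorted, kernel=3):
    # "Internal" prediction of each pixel from its local neighbourhood;
    # a local-mean model stands in for the paper's generative predictor.
    img = distorted.astype(np.float64)
    predicted = uniform_filter(img, size=kernel)
    residual = img - predicted
    # Entropy of the binned prediction residual as a rough surprise proxy:
    # the harder the image is to predict, the larger the value.
    counts, _ = np.histogram(residual, bins=256)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log2(p)))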
No-Reference Image Quality Assessment in the Spatial Domain
TLDR: Despite its simplicity, BRISQUE is shown to be statistically better than the full-reference peak signal-to-noise ratio and the structural similarity index, and highly competitive with all present-day distortion-generic NR-IQA algorithms.
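BRISQUE's spatial-domain features are built on mean-subtracted, contrast-normalized (MSCN) coefficients of the luminance image; the sketch below computes them. The Gaussian window here only approximates the 7x7 window used in the paper, and the downstream generalized-Gaussian feature fitting and SVR regression are omitted.

import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(gray, sigma=7.0 / 6.0, c=1.0):
    # Local mean and local standard deviation estimated with a Gaussian window.
    img = gray.astype(np.float64)
    mu = gaussian_filter(img, sigma)
    var = gaussian_filter(img * img, sigma) - mu * mu
    sigma_map = np.sqrt(np.maximum(var, 0.0))
    # Divisive normalization: subtract the local mean, divide by local contrast.
    return (img - mu) / (sigma_map + c)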
Blind Image Quality Assessment: From Natural Scene Statistics to Perceptual Quality
TLDR: DIIVINE is capable of assessing the quality of a distorted image across multiple distortion categories, unlike most NR-IQA algorithms, which are distortion-specific in nature, and is statistically superior to the often-used peak signal-to-noise ratio (PSNR) and statistically equivalent to the popular structural similarity index (SSIM).
A Psychovisual Quality Metric in Free-Energy Principle
TLDR: A new psychovisual quality metric for images is proposed based on recent developments in brain theory and neuroscience, particularly the free-energy principle; it can correctly measure the visual quality of some model-based image processing algorithms for which the competing metrics often contradict viewers' opinions.
Deep Neural Networks for No-Reference and Full-Reference Image Quality Assessment
TLDR: A deep neural network-based approach to image quality assessment (IQA) that allows for joint learning of local quality and local weights in a unified framework and shows a high ability to generalize between different databases, indicating high robustness of the learned features.
VSI: A Visual Saliency-Induced Index for Perceptual Image Quality Assessment
TLDR: Extensive experiments performed on four large-scale benchmark databases demonstrate that the proposed IQA index VSI achieves better prediction accuracy than all state-of-the-art IQA indices the authors could find, while maintaining moderate computational complexity.
FSIM: A Feature Similarity Index for Image Quality Assessment
TLDR: A novel feature similarity (FSIM) index for full-reference IQA is proposed, based on the fact that the human visual system (HVS) understands an image mainly according to its low-level features.
Improved Techniques for Training GANs
TLDR: This work focuses on two applications of GANs: semi-supervised learning and the generation of images that humans find visually realistic; it presents ImageNet samples with unprecedented resolution and shows that the methods enable the model to learn recognizable features of ImageNet classes.
Waterloo Exploration Database: New Challenges for Image Quality Assessment Models
TLDR: This work establishes a large-scale database named the Waterloo Exploration Database, which in its current state contains 4,744 pristine natural images and 94,880 distorted images created from them, and presents three alternative test criteria to evaluate the performance of IQA models: the pristine/distorted image discriminability test, the listwise ranking consistency test, and the pairwise preference consistency test.