No-reference image quality assessment based on visual explanation images and deep transfer learning
Abstract
Quantifying image quality in the absence of a reference image remains a challenge despite the introduction of numerous no-reference image quality assessment (NR-IQA) methods in recent years. Unlike most existing NR-IQA methods, this paper proposes an efficient NR-IQA method based on deep visual interpretations. Specifically, the main components of the proposed method are: i) generating a pseudo-reference image (PRI) for the input distorted image, ii) employing a pretrained convolutional neural network to extract feature maps from the distorted image and the corresponding PRI, iii) producing visual explanation images (VEIs) from the feature maps of the distorted image and the corresponding PRI, iv) measuring the similarity between the two VEIs using an image similarity metric, and v) employing a non-linear mapping function for quality score alignment. In our experiments, we evaluated the efficacy of the proposed method across various forms of distortion using four benchmark datasets (LIVE, SIQAD, CSIQ, and TID2013). The proposed approach performs on par with state-of-the-art methods, as evidenced by comparisons with both hand-crafted and deep learning-based NR-IQA approaches.
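A minimal sketch of the five-step pipeline described above is given below. It is illustrative only: the backbone (assumed here to be VGG16), the PRI generator (a simple Gaussian blur stands in for whatever restoration or degradation model the paper uses), the VEI construction (channel-averaged activations rather than the full Grad-CAM weighting), the similarity metric (assumed SSIM), and the logistic mapping parameters are all assumptions, not the authors' exact implementation.

```python
# Hypothetical sketch of the described NR-IQA pipeline: PRI generation,
# feature extraction, VEI construction, VEI similarity, and score mapping.
# All concrete choices below are assumptions for illustration.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from scipy.ndimage import gaussian_filter
from skimage.metrics import structural_similarity

# Pretrained backbone used as a fixed feature extractor (assumption: VGG16).
backbone = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

preprocess = T.Compose([
    T.ToTensor(),                       # expects a float HWC RGB array in [0, 1]
    T.Resize((224, 224)),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def pseudo_reference(distorted_rgb):
    """Placeholder PRI generator: a Gaussian blur stands in for the method's
    actual pseudo-reference construction."""
    return gaussian_filter(distorted_rgb, sigma=(2, 2, 0))

def visual_explanation(image_rgb):
    """Simplified VEI: channel-averaged activations of a mid-level conv block,
    upsampled to a fixed size. A Grad-CAM-style map would additionally weight
    channels by gradients of a target score."""
    with torch.no_grad():
        x = preprocess(image_rgb).unsqueeze(0)
        fmap = backbone[:16](x)                      # mid-level feature maps
        vei = fmap.mean(dim=1, keepdim=True)         # aggregate over channels
        vei = torch.nn.functional.interpolate(
            vei, size=(224, 224), mode="bilinear", align_corners=False)
    vei = vei.squeeze().numpy()
    return (vei - vei.min()) / (vei.max() - vei.min() + 1e-8)

def quality_score(distorted_rgb, beta=(10.0, 0.5)):
    """Similarity between the two VEIs, passed through an illustrative
    logistic mapping for score alignment."""
    pri = pseudo_reference(distorted_rgb)
    sim = structural_similarity(visual_explanation(distorted_rgb),
                                visual_explanation(pri), data_range=1.0)
    b1, b2 = beta                                    # illustrative parameters
    return 1.0 / (1.0 + np.exp(-b1 * (sim - b2)))    # nonlinear score alignment
```

In practice, the logistic parameters would be fitted against subjective scores on a training split of the benchmark datasets, as is standard in IQA evaluation.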
Keywords
Deep learning; Grad-CAM; Image quality assessment; Pseudo-reference; Similarity measures; Visual interpretations
DOI: http://doi.org/10.11591/ijeecs.v36.i3.pp1521-1531