Evaluating Quantitative Metrics of Tone-Mapped Images


Bibliographic Details
Published in: IEEE Transactions on Image Processing, Vol. 31, pp. 1751-1760
Main Authors: Khan, Ishtiaq Rasool; Alotaibi, Theyab A.; Siddiq, Asif; Bourennani, Farid
Format: Journal Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2022
Description
Summary: Subjective evaluation of tone-mapped images is tedious and time-consuming; therefore, it is desirable to have algorithms for automatic quality assessment. Many full-reference and blind metrics have been developed for this purpose, but their performance is generally evaluated on limited benchmark datasets. This leaves open the possibility that a metric's observed performance is due to overfitting, and that it might not perform well on all scenes. In this work, we propose a novel framework that uses population-based metaheuristics to evaluate the performance of these metrics without requiring any subjectively evaluated reference dataset. The proposed algorithm does not modify individual image pixels; instead, it modifies the tone-mapping curve to synthesize realistic tone-mapped images for evaluation. Moreover, the framework does not need to know the underlying model of the evaluated metric, which is treated as a black box and can be replaced by any other metric seamlessly. Therefore, any metric designed in the future can also be evaluated by simply replacing one module in the proposed framework. We evaluate six existing metrics and synthesize images to which the metrics fail to assign scores appropriate to their visual quality. We also propose a method to rank the relative performance of the evaluated metrics through a competition in which each metric tries to find errors in the scores given by the other metrics.
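The summary describes searching over tone-mapping curves with a population-based metaheuristic while treating the evaluated metric as a replaceable black box. A minimal sketch of that idea, under loose assumptions: the stand-in metric, the one-parameter gamma curve, and all function names below are illustrative, not the paper's actual implementation (the paper evolves the tone curve itself rather than a single parameter).

```python
import numpy as np

def black_box_metric(img):
    """Stand-in for any quality metric: a callable mapping a tone-mapped
    image to a score. This toy version just rewards a mid-gray mean; the
    framework's point is that any metric can be dropped in here."""
    return -abs(img.mean() - 0.5)

def tone_map(hdr, gamma):
    # Simple parametric tone curve (gamma); a placeholder for the
    # richer curve representation evolved in the paper.
    return np.clip(hdr, 0.0, None) ** gamma

def evolve_curve(hdr, metric, pop_size=20, generations=30, seed=0):
    """Elitist population search over the curve parameter: keep the
    candidates the black-box metric scores highest, mutate them, repeat."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(0.2, 3.0, pop_size)  # initial gamma candidates
    for _ in range(generations):
        scores = np.array([metric(tone_map(hdr, g)) for g in pop])
        elite = pop[np.argsort(scores)[-pop_size // 2:]]   # best half
        children = elite + rng.normal(0.0, 0.1, elite.size)  # mutate
        pop = np.concatenate([elite, np.clip(children, 0.05, 5.0)])
    scores = np.array([metric(tone_map(hdr, g)) for g in pop])
    return pop[int(np.argmax(scores))]

hdr = np.linspace(0.0, 1.0, 256) ** 2.2  # toy "HDR" luminance ramp
best_gamma = evolve_curve(hdr, black_box_metric)
```

Because the metric is only ever called, never inspected, swapping in a different metric means replacing the single `black_box_metric` callable, which mirrors the modularity claimed in the summary.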
ISSN: 1057-7149; 1941-0042
DOI: 10.1109/TIP.2022.3146640