Divisive normalization image quality metric revisited.
Jordi Muñoz-Marí, Jesús Malo, Valero Laparra

Subjects: Image quality; Machine vision; Pooling; Normalization (image processing); Wavelet transform; Image processing; Image enhancement; Machine learning; Image contrast; Optics; Atomic and Molecular Physics and Optics; Electronic, Optical and Magnetic Materials; Computer Vision and Pattern Recognition; Artificial intelligence
Structural similarity metrics and information-theory-based metrics have been proposed as completely different alternatives to the traditional metrics based on error visibility and human vision models. Three basic criticisms were raised against the traditional error visibility approach: (1) it is based on near-threshold performance, (2) its geometric meaning may be limited, and (3) stationary pooling strategies may not be statistically justified. These criticisms, together with the good performance of structural and information-theory-based metrics, have popularized the idea of their superiority over the error visibility approach. In this work we show, experimentally or analytically, that the above criticisms do not apply to error visibility metrics that use a sufficiently general divisive normalization masking model. Therefore, the traditional divisive normalization metric [1] is not intrinsically inferior to the newer approaches. In fact, experiments on a number of databases covering a wide range of distortions show that divisive normalization is fairly competitive with the newer approaches, robust, and easy to interpret in linear terms. These results suggest that, despite the criticisms of the traditional error visibility approach, divisive normalization masking models should be considered in the image quality discussion.
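The divisive normalization idea the abstract refers to can be sketched in a few lines: each transform coefficient is rescaled by the pooled activity of its neighbors, and quality is measured as a distance in that normalized domain. The following is a minimal illustrative sketch, not the paper's fitted model; the kernel shape and the parameters `b`, `g`, and `sigma` are assumptions chosen for readability.

```python
import numpy as np

def divisive_normalization(w, b=0.1, g=2.0, sigma=1.0):
    """Divisively normalize a 1-D vector of transform coefficients w.

    Each response is r_i = sign(w_i) * |w_i|^g / (b + sum_j h_ij |w_j|^g),
    where h is an interaction kernel over coefficient index. Here h is
    a Gaussian over index distance (an assumption for this sketch); in
    masking models h is fitted to psychophysical data.
    """
    w = np.asarray(w, dtype=float)
    idx = np.arange(w.size)
    # Gaussian neighborhood over coefficient position (illustrative).
    h = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / sigma) ** 2)
    e = np.abs(w) ** g          # rectified, exponentiated energies
    return np.sign(w) * e / (b + h @ e)

def dn_metric(w_ref, w_dist, **kw):
    """Perceptual distance: Euclidean norm in the normalized domain."""
    return np.linalg.norm(divisive_normalization(w_ref, **kw)
                          - divisive_normalization(w_dist, **kw))
```

Because the distance is a plain Euclidean norm after a fixed nonlinear map, the metric remains easy to analyze in linear terms around a reference image, which is the interpretability property the abstract highlights.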
year | journal
---|---
2010-04-03 | Journal of the Optical Society of America. A, Optics, image science, and vision