Development of objective quality metrics that can automatically and accurately measure perceptual video quality is becoming increasingly important as video applications become pervasive. Prior work in video quality assessment is mainly concerned with applications where the frame rate of the video is fixed: the objective quality metric compares each pair of corresponding frames to derive a similarity score or distortion between two videos with the same frame rate. In many emerging applications targeting heterogeneous users with different display devices and/or different communication links, the same video content may be accessed at varying frame rates, frame sizes, or quantization levels (assuming the video is coded into a scalable stream with spatial/temporal/SNR scalability). In applications permitting only very low bit rate video, one often has to decide whether to code an original high frame-rate video at the same frame rate but with significant quantization, or at a lower frame rate with less quantization. In all of the preceding scenarios, as well as many others, it is important to be able to objectively quantify the perceptual quality of a video that has been subjected to both quantization and frame rate reduction.
We conducted subjective tests to evaluate how frame rate and quantization artifacts influence perceived video quality. Based on these results, we propose a quality metric, a function of PSNR and frame rate, that is the product of a PSNR-based term and a temporal correction factor (TCF). The first term, a sigmoidal function, assesses the quality of the video based on the average PSNR of the frames included in the video (excluding interpolated frames); the TCF, an inverted falling exponential, reduces the quality assigned by the first term according to the actual frame rate.
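The product model above can be sketched as follows. The functional forms (a sigmoid of average PSNR and an inverted falling exponential of normalized frame rate) follow the description in the text, but the specific parameter values (`c`, `s`, `p`) and the maximum frame rate are illustrative assumptions, not the fitted values from our subjective tests.

```python
import math

def tcf(frame_rate, max_rate=30.0, c=2.0):
    """Temporal correction factor: an inverted falling exponential of the
    normalized frame rate. Equals 1 at the full frame rate and decreases as
    the frame rate drops. The decay parameter c is content dependent; the
    value here is hypothetical."""
    return (1.0 - math.exp(-c * frame_rate / max_rate)) / (1.0 - math.exp(-c))

def spatial_quality(psnr, s=0.3, p=30.0):
    """Sigmoidal mapping from average PSNR (in dB, over coded frames only)
    to a normalized quality in (0, 1); s and p are illustrative parameters."""
    return 1.0 / (1.0 + math.exp(-s * (psnr - p)))

def perceived_quality(psnr, frame_rate, max_rate=30.0):
    """Product model: sigmoidal PSNR term times the temporal correction factor."""
    return spatial_quality(psnr) * tcf(frame_rate, max_rate)
```

For example, a video coded at 15 fps receives a lower predicted quality than one coded at 30 fps with the same average PSNR, since the TCF falls below 1 at reduced frame rates.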
Our model has only two parameters and correlates very well with the subjective ratings obtained in our tests. Each function has a single parameter that is video-content dependent. We are currently studying the dependency of these parameters on content features that can be easily derived from the underlying video. The proposed model is shown to be highly accurate compared to the subjective ratings for a large set of test sequences. We note that it is possible to replace the sigmoidal function with other metrics that can more accurately assess the quality of a video at the full frame rate. Also, although the proposed metric is only validated on SVC video with temporal and quality scalability, we expect it to be applicable to any coded video.
This project is supported by NSF and CATT.
Related Report and Publications