The growing popularity of video communication, with applications as broad as entertainment, education and business, increases end users' demand for higher quality video. However, in many networked video applications, video streams may be corrupted during transmission, whether by physical channel bit errors or by packet losses caused by congestion and delay. Ultimately, these various types of channel impairment all lead to the loss of video packets. Being able to quantify the quality of packet-loss-impaired video, especially with automatic assessment methods, is very important for network service design and provisioning.
Traditional objective approaches to video quality assessment, such as peak signal-to-noise ratio (PSNR) and mean squared error (MSE), weight the error at all pixels equally and do not capture perceptual quality very well, especially in the presence of packet loss. In reality, packet losses have different visual impacts. For example, one may occur in a visually salient area while another may hide in the background or in a uniformly textured area; one may occur in the center of an active scene while another falls in a motionless area. Not all such packet-loss-induced errors have the same effect on average human viewers. Thus, evaluating the perceptual quality of video affected by packet loss is a challenging problem.
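For reference, the two pixel-wise metrics mentioned above can be sketched in a few lines of NumPy. This is an illustrative sketch, not code from the project; the function names are chosen here for clarity.

```python
import numpy as np

def mse(ref, dist):
    """Mean squared error between a reference frame and a distorted frame.

    Every pixel contributes equally, regardless of its perceptual importance.
    """
    ref = ref.astype(np.float64)
    dist = dist.astype(np.float64)
    return np.mean((ref - dist) ** 2)

def psnr(ref, dist, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    err = mse(ref, dist)
    if err == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_val ** 2 / err)
```

Because both measures average uniformly over the frame, an error confined to a salient foreground region and an equally large error hidden in background texture yield identical scores, which is precisely the limitation discussed above.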
In the human visual system (HVS), there is physiological and psychological evidence that human beings do not pay equal attention to all exposed visual information; instead, they are highly selective about what they see in a scene. The high-resolution vision produced when an observer fixates on a region is called foveal vision, and such regions are also known as the focus of attention (FOA) or salient regions. In recent years, there has been renewed interest in exploiting visual attention models (VAM) or saliency detection mechanisms for image and video quality assessment.
The goal of this project is to explore the properties of saliency information in images and video, and to incorporate them into the design of perceptual video quality metrics. So far, we have investigated the roles played by saliency information in the visibility of packet loss. Furthermore, we have proposed several saliency-based metrics that model the perceptual quality of images and videos affected by packet loss.
This project is supported by WICAT.