A system and method for predicting the perceptual quality of a compressed video using a self-reference technique is disclosed. The method computes a frame difference image from the luminance components of a frame and at least one other (alternate) frame of an input test video. A blurred frame and a blurred frame difference image are then obtained by low-pass filtering the frame and the frame difference image. A divisive normalization operator is applied independently to each of the four image types, and a generalized Gaussian distribution (GGD) is fitted to the normalized coefficients. Spatial and temporal features are then extracted from the GGD parameters. Absolute differences between the spatial and temporal features are computed and weighted by the amount of motion in a given frame. These features are pooled over patches across the frames to obtain a final video quality score Q. The method outperforms existing methods while remaining computationally simple.
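The pipeline described above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: it assumes Gaussian low-pass filtering, a moment-matching GGD fit, an alternate-frame step of 2, and simple mean pooling of the weighted feature differences (the disclosure pools over patches); all function names and parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.special import gamma

def mscn(img, sigma=7.0 / 6.0, c=1.0):
    """Divisive normalization: subtract the local mean, divide by local std."""
    mu = gaussian_filter(img, sigma)
    var = gaussian_filter(img * img, sigma) - mu * mu
    return (img - mu) / (np.sqrt(np.maximum(var, 0.0)) + c)

def fit_ggd(x):
    """Moment-matching GGD fit; returns (shape alpha, variance sigma^2)."""
    x = x.ravel()
    sigma_sq = np.mean(x ** 2)
    rho = sigma_sq / (np.mean(np.abs(x)) ** 2 + 1e-12)
    # For a GGD with shape a: E[x^2] / E[|x|]^2 = G(1/a) G(3/a) / G(2/a)^2.
    # Invert that relation by a simple grid search over candidate shapes.
    a_grid = np.arange(0.2, 10.0, 0.001)
    r_grid = gamma(1.0 / a_grid) * gamma(3.0 / a_grid) / gamma(2.0 / a_grid) ** 2
    alpha = a_grid[np.argmin(np.abs(r_grid - rho))]
    return alpha, sigma_sq

def st_features(frame, prev_alt, blur_sigma=2.0):
    """Spatial/temporal GGD feature differences for one frame (illustrative)."""
    diff = frame - prev_alt  # frame difference from an alternate frame
    # The four image types: frame, blurred frame, difference, blurred difference.
    imgs = [frame, gaussian_filter(frame, blur_sigma),
            diff, gaussian_filter(diff, blur_sigma)]
    feats = [np.array(fit_ggd(mscn(im))) for im in imgs]
    spatial = np.abs(feats[0] - feats[1])    # |GGD(frame) - GGD(blurred frame)|
    temporal = np.abs(feats[2] - feats[3])   # |GGD(diff) - GGD(blurred diff)|
    motion = np.mean(np.abs(diff))           # crude per-frame motion weight
    return spatial, motion * temporal

def quality_score(frames, step=2):
    """Pool per-frame features into a single score Q (mean pooling assumed)."""
    scores = []
    for t in range(step, len(frames)):
        s, w = st_features(frames[t].astype(float), frames[t - step].astype(float))
        scores.append(np.concatenate([s, w]))
    return float(np.mean(scores))
```

In this sketch a larger score indicates larger self-reference feature deviations; the disclosed method would map such pooled features to a calibrated quality score Q.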
This work is licensed under a Creative Commons Attribution 4.0 License.
Inguva, Sasi; Chen, Chao; and Kokaram, Anil, "A No-Reference Video Quality Predictor For Compressed Videos", Technical Disclosure Commons, (May 01, 2017)