Learning-Based Multi-Frame Video Quality Enhancement

This paper was presented by Junchao Tong, Xilin Wu, Dandan Ding, Zheng Zhu, and Zoe Liu, “Learning-Based Multi-Frame Video Quality Enhancement,” in the Proceedings of the IEEE International Conference on Image Processing (ICIP), September 22-25, 2019, in Taipei, Taiwan. The convolutional neural network (CNN) has shown great success in video quality enhancement. Existing methods mainly conduct enhancement tasks in the spatial domain, exploring the pixel correlations within one frame. Taking advantage of the similarity across successive frames, this paper demonstrates a learning-based multi-frame approach, aiming to explore the greatest potential for video quality...
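The core idea of a multi-frame approach is to feed the network not just the frame being enhanced but also its temporal neighbors, so the model can exploit correlations across successive frames. As a minimal sketch (the helper name, clamping policy, and channel-stacking layout are assumptions for illustration, not the paper's actual network design):

```python
import numpy as np

def stack_neighbors(frames, t, radius=1):
    """Build a multi-frame CNN input by stacking frame t with its temporal
    neighbors along a new last axis. Indices at sequence edges are clamped
    to the valid range, so boundary frames reuse their nearest neighbor."""
    n = len(frames)
    picked = [frames[min(max(t + d, 0), n - 1)] for d in range(-radius, radius + 1)]
    return np.stack(picked, axis=-1)  # shape (H, W, 2*radius + 1)

# Example: five 2x2 frames; the stacked input for t=2 holds frames 1, 2, 3.
frames = [np.full((2, 2), i, dtype=np.float32) for i in range(5)]
multi_frame_input = stack_neighbors(frames, t=2)
```

A network consuming this input can then learn to copy detail from whichever neighbor preserved it best, rather than relying on spatial correlations within a single frame.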


Bi-Prediction Based Video Quality Enhancement via Learning

This paper was presented by Dandan Ding, Wenyu Wang, Junchao Tong, Xinbo Gao, Zoe Liu, and Yong Fang, “Bi-Prediction Based Video Quality Enhancement via Learning,” IEEE Transactions on Cybernetics, June 17, 2020. Convolutional neural network (CNN)-based video quality enhancement generally employs optical flow for pixel-wise motion estimation and compensation, then uses the motion-compensated frames to jointly explore the spatiotemporal correlation across frames and facilitate the enhancement. This method, called the optical-flow-based method (OPT), usually achieves high accuracy at the expense of high computational complexity. In this article, we develop a new framework, referred to as bi-prediction-based multi-frame video enhancement (PMVE), to achieve a...
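The motion-compensation step in the OPT pipeline warps a neighboring frame toward the current one using a dense optical-flow field, so that corresponding pixels line up before the network fuses them. A minimal dependency-free sketch of that warp, assuming a precomputed flow field of per-pixel (dx, dy) displacements and using nearest-neighbor sampling (real pipelines typically use bilinear interpolation):

```python
import numpy as np

def warp_frame(frame, flow):
    """Motion-compensate `frame` using a dense flow field of shape (H, W, 2)
    holding per-pixel (dx, dy) displacements. Each output pixel is sampled
    from the displaced source location, clamped to the image bounds."""
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return frame[src_y, src_x]

# Zero flow leaves the frame unchanged; a uniform (1, 0) flow samples each
# pixel from its right-hand neighbor.
frame = np.arange(12, dtype=np.float32).reshape(3, 4)
compensated = warp_frame(frame, np.full((3, 4, 2), [1.0, 0.0]))
```

Because estimating the flow field itself is the expensive part, replacing it with a learned bi-prediction scheme (as PMVE does) is where the complexity savings come from.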
