Deep Deinterlacing
Date & Time
Wednesday, November 11, 2020, 5:15 PM - 5:45 PM
Session Type
Technical session
Michael Bernasconi, Abdelaziz Djelouah

With the increased demand for streaming services today, existing catalog content is finding new exposure to large audiences. However, catalog content and other old footage are generally only available in interlaced format. Interlacing has traditionally been used to double the perceived frame rate, without consuming additional bandwidth, through a subsampling strategy that alternately samples even and odd rows. This enhanced motion perception and reduced visible flicker, but was designed for content captured and displayed in the same interlaced format. While old CRT displays can show interlaced video directly, modern TV displays typically use progressive scanning, which has recently increased interest in high-quality deinterlacing algorithms. Deinterlacing can be considered an ill-posed inverse problem whose goal is to reconstruct the original input signal. As such, it is challenging to solve, which becomes most apparent in cases of large motion. Interestingly, however, the degradation incurred through the subsampling strategy employed in interlacing is easy to describe, which makes deinterlacing an ideal candidate for a fully supervised deep learning approach. Despite recent successes of machine learning on other tasks such as upscaling and denoising, deinterlacing has been explored relatively little, and the early solutions that employ learning do not consistently outperform the deinterlacing methods already established in the industry. In this paper, we aim to close this gap by proposing a novel approach to deep video deinterlacing. Our approach addresses previous shortcomings and leverages temporal information more coherently. In addition to describing our architecture and training process in detail, we include an ablation study that shows the impact of our individual architectural choices.
Last but not least, we conduct a detailed objective evaluation comparing our approach to existing industry solutions as well as earlier learning-based methods, and show that we consistently achieve significantly improved quality scores.
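The subsampling the abstract describes is simple enough to sketch directly. The snippet below is a minimal illustration (not the authors' code): it forms one interlaced frame by taking even rows from one progressive frame and odd rows from the next, which is also how supervised training pairs can be generated for free from progressive footage. The function name and use of NumPy are assumptions for illustration.

```python
import numpy as np

def interlace(frame_t, frame_t1):
    """Simulate the interlacing degradation: keep even rows from the
    progressive frame at time t and odd rows from the frame at time t+1,
    so each time instant contributes half the vertical samples."""
    assert frame_t.shape == frame_t1.shape
    out = np.empty_like(frame_t)
    out[0::2] = frame_t[0::2]   # even rows: field sampled at time t
    out[1::2] = frame_t1[1::2]  # odd rows: field sampled at time t+1
    return out

# Toy example: two 4x4 "frames" with constant values 0 and 1.
a = np.zeros((4, 4))
b = np.ones((4, 4))
mixed = interlace(a, b)
# Rows now alternate between the two source frames down the image.
```

Deinterlacing inverts this mapping: for each field, the missing rows must be reconstructed, and with motion between t and t+1 simply weaving the two fields back together produces the familiar combing artifacts.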

Technical Depth of Presentation
What Attendees will Benefit Most from this Presentation
Engineers, Researchers, Technologists
Take-Aways from this Presentation
1. Proposed neural network architecture for the deinterlacing model
2. How temporal information is used and its benefits in deep deinterlacing
3. State-of-the-art results compared to existing methods