Marcus Edel, Machine Learning Engineer, and Aaron Boxer, Senior Software Developer, both of Collabora, present the “Super Resolution on Resource Constrained Devices” tutorial at the May 2021 Embedded Vision Summit.
Internet video streaming has recently experienced tremendous growth, but delivery quality remains critically dependent on network bandwidth. To mitigate bandwidth limitations, most video is compressed, resulting in image artifacts, noise, and blur. Quality is also degraded by image upscaling, which is required to match the very high pixel densities of mobile devices. Scientists have developed many upscaling techniques, such as Lanczos resampling, but for over 20 years no fundamentally new methods emerged.
This situation is changing now thanks to a new class of techniques known as deep learning super-resolution (DLSR). Despite their excellent performance, DLSR methods cannot be easily applied to real-world applications due to their heavy computational requirements. In this talk, Edel and Boxer present their accurate and lightweight network for video super-resolution.
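The talk itself describes the speakers' own network, which is not reproduced here. To give a flavor of why DLSR can be made lightweight, the snippet below implements sub-pixel convolution (pixel shuffle), the upscaling step popularized by efficient super-resolution networks such as ESPCN: the network computes features at low resolution and a cheap channel-to-space rearrangement produces the high-resolution output. This is a hedged, NumPy-only illustration of that one operation, not the method presented by Edel and Boxer.

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C * r^2, H, W) feature map into (C, H * r, W * r).

    Each group of r^2 channels contributes the r x r sub-pixel block of
    one output pixel, so all convolutions can run at low resolution.
    """
    c2, h, w = x.shape
    assert c2 % (r * r) == 0, "channel count must be divisible by r^2"
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)  # -> (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

# Example: 4 channels of a 1x1 map become one 2x2 upscaled channel.
out = pixel_shuffle(np.arange(4.0).reshape(4, 1, 1), r=2)
```

The appeal for resource-constrained devices is that the expensive convolutions operate on the small input, and only this nearly free rearrangement touches the full-resolution grid.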
See here for a PDF of the slides.