“Knowledge Distillation of Convolutional Neural Networks,” a Presentation from Bending Spoons

Federico Perazzi, Head of AI at Bending Spoons, presents the “Knowledge Distillation of Convolutional Neural Networks” tutorial at the May 2022 Embedded Vision Summit.

Convolutional neural networks are ubiquitous in academia and industry, especially for computer vision and language processing tasks. However, their superior ability to learn meaningful representations from large-scale data comes at a price: they are often over-parameterized, with millions of parameters that add latency and unnecessary cost when deployed in production.

In this talk, Perazzi presents the foundations of knowledge distillation, an essential tool for compressing neural networks while preserving their performance. Knowledge distillation entails training a lightweight model, referred to as the student, to replicate the behavior of a larger pre-trained model, called the teacher. He illustrates how this process works in detail through a real-world image restoration task that Bending Spoons recently worked on. By distilling the teacher model's knowledge into a smaller student, the company obtained a threefold speedup while also improving the quality of the reconstructed images.
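The talk itself covers the details, but the core student-teacher training objective can be sketched in a few lines. The snippet below is a generic illustration (not taken from the presentation, and the temperature and weighting values are illustrative assumptions): the student is trained on a weighted sum of a soft-target loss, which matches the teacher's temperature-softened output distribution, and the usual hard-label cross-entropy.

```python
import math

def softmax(logits, temperature=1.0):
    # A temperature > 1 softens the distribution, exposing the teacher's
    # relative confidences across classes ("dark knowledge").
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, true_label,
                      temperature=4.0, alpha=0.5):
    """Weighted sum of (a) KL divergence between the softened teacher and
    student distributions and (b) cross-entropy on the ground-truth label.
    temperature and alpha are hyperparameters chosen per task."""
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    # KL(teacher || student); scaling by T^2 keeps gradient magnitudes
    # comparable across temperatures.
    kl = sum(pt * math.log(pt / ps) for pt, ps in zip(p_teacher, p_student))
    soft_loss = (temperature ** 2) * kl
    hard_loss = -math.log(softmax(student_logits)[true_label])
    return alpha * soft_loss + (1 - alpha) * hard_loss
```

In practice both terms are minimized with gradient descent over the student's parameters while the teacher stays frozen; a student whose outputs drift from the teacher's incurs a larger loss than one that tracks them.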

See here for a PDF of the slides.

