“Introduction to Knowledge Distillation: Smaller, Smarter AI Models for the Edge,” a Presentation from Deep Sentinel

David Selinger, CEO of Deep Sentinel, presents the “Introduction to Knowledge Distillation: Smaller, Smarter AI Models for the Edge” tutorial at the May 2025 Embedded Vision Summit.

As edge computing demands smaller, more efficient models, knowledge distillation emerges as a key approach to model compression. In this presentation, Selinger delves into the details of this process, exploring what knowledge distillation entails and the requirements for its implementation, including dataset size and tools.

Selinger examines when to use knowledge distillation, weighs its pros and cons, and showcases examples of successfully distilled models. Based on performance data highlighting the benefits of distillation, he concludes that knowledge distillation is a powerful tool for creating smaller, smarter models that thrive at the edge.
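The talk itself is not reproduced in this summary, but the core mechanism it covers can be illustrated briefly. In the standard soft-target formulation of knowledge distillation, a small student model is trained to match the temperature-softened output distribution of a large teacher. Below is a minimal sketch of that loss in NumPy; the function names, the temperature value, and the logits are illustrative assumptions, not details taken from the presentation.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; higher temperature yields softer targets."""
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max()  # stabilize the exponentials
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=4.0):
    """KL divergence between softened teacher and student distributions.

    Scaled by temperature**2, as is conventional, so gradient magnitudes
    stay comparable across temperature settings.
    """
    p = softmax(teacher_logits, temperature)  # soft targets from the teacher
    q = softmax(student_logits, temperature)  # student's softened prediction
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)))
    return float(kl) * temperature ** 2

# Hypothetical logits for a 3-class problem:
teacher = [3.0, 1.0, 0.2]
matched = distillation_loss(teacher, [3.0, 1.0, 0.2])   # student agrees: loss is 0
mismatched = distillation_loss(teacher, [0.2, 1.0, 3.0])  # student disagrees: loss > 0
```

In practice this soft-target term is usually combined with an ordinary cross-entropy loss on the hard labels, weighted by a mixing coefficient; the soft targets carry the teacher's "dark knowledge" about relative class similarities, which is what lets a much smaller student approach the teacher's accuracy.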

See here for a PDF of the slides.

