“High-fidelity Conversion of Floating-point Networks for Low-precision Inference Using Distillation with Limited Data,” a Presentation from Imagination Technologies

James Imber, Senior Research Engineer at Imagination Technologies, presents the “High-fidelity Conversion of Floating-point Networks for Low-precision Inference Using Distillation with Limited Data” tutorial at the May 2021 Embedded Vision Summit.

When converting floating-point networks to low-precision equivalents for high-performance inference, the primary objective is to compress the network as much as possible whilst maintaining fidelity to the original floating-point network. This is particularly challenging when only a reduced or unlabelled dataset is available. Data may be limited for commercial or legal reasons: for example, companies may be unwilling to share valuable data and labels that represent a substantial investment of resources, or the collector of the original dataset may not be permitted to share it for data privacy reasons.
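
As a concrete illustration of what "low-precision conversion" involves, the following is a minimal sketch of symmetric uniform (fake) quantization of a weight tensor in PyTorch. The scheme, function name and per-tensor scaling are illustrative assumptions; the talk does not prescribe this particular quantization scheme.

```python
import torch

def quantize_symmetric(w: torch.Tensor, bits: int = 8) -> torch.Tensor:
    """Fake-quantize a tensor to a signed integer grid and back.

    Symmetric uniform quantization: values are scaled onto the
    [-2^(bits-1), 2^(bits-1) - 1] integer range, rounded, then
    rescaled, so the result exposes the precision loss directly.
    """
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax                     # one scale per tensor
    q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax)
    return q * scale                                 # dequantize for comparison

w = torch.randn(64, 64)
print((w - quantize_symmetric(w, bits=8)).abs().max())  # worst-case rounding error
```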

Imber presents a method based on distillation that allows high-fidelity, low-precision networks to be produced for a wide range of different network types, using the original trained network in place of a labelled dataset. The proposed approach is directly applicable across multiple domains (e.g., classification, segmentation and style transfer) and can be adapted to numerous network compression techniques.
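
As a rough sketch of the core idea, the code below performs one distillation update on an unlabelled batch: a frozen floating-point teacher supplies soft targets, and a trainable low-precision student is fitted to them, so no ground-truth labels are required. This is a generic output-distillation step for a classification network, not Imber's exact procedure; the function name, KL loss and temperature value are assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_step(teacher, student, optimizer, images, T=4.0):
    """One label-free distillation update on a batch of images.

    The frozen float teacher's softened outputs stand in for the
    missing labels; the low-precision student is trained to match
    them via a temperature-scaled KL divergence.
    """
    with torch.no_grad():
        t_logits = teacher(images)                   # soft targets, no labels needed
    s_logits = student(images)
    loss = F.kl_div(
        F.log_softmax(s_logits / T, dim=1),
        F.softmax(t_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                      # standard temperature correction
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice the student would typically be a quantization-aware copy of the teacher (e.g. fake-quantized weights and activations trained through a straight-through estimator), and feature-level losses can be added for dense-prediction tasks such as segmentation and style transfer.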

See here for a PDF of the slides.
