“Moving CNNs from Academic Theory to Embedded Reality,” a Presentation from Synopsys

Tom Michiels, System Architect for Embedded Vision Processors at Synopsys, presents the "Moving CNNs from Academic Theory to Embedded Reality" tutorial at the May 2017 Embedded Vision Summit.

In this presentation, you will learn to recognize and avoid the pitfalls of moving from an academic CNN/deep learning graph to a commercial embedded vision design. You will also learn about the cost vs. accuracy trade-offs of CNN bit width, about balancing internal memory size against external memory bandwidth, and about the importance of keeping data local to the CNN processor to reduce external memory traffic. Michiels also walks through an example customer design for a power- and cost-sensitive automotive scene segmentation application that requires high flexibility to adapt to future CNN graph evolutions.
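To give a feel for the bit-width trade-off discussed above, here is a minimal sketch (not taken from the presentation) that uniformly quantizes a set of floating-point weights to a chosen bit width and measures the resulting rounding error. The quantization scheme, weight distribution, and bit widths shown are illustrative assumptions only; real embedded deployments would evaluate end-to-end network accuracy rather than per-layer error.

```python
import numpy as np

def quantize_uniform(weights, num_bits):
    """Uniformly quantize float weights to a signed fixed-point grid with the
    given bit width, then dequantize so the rounding error can be measured."""
    qmax = 2 ** (num_bits - 1) - 1          # e.g. 127 for 8-bit signed
    scale = np.max(np.abs(weights)) / qmax  # map the largest magnitude to qmax
    quantized = np.clip(np.round(weights / scale), -qmax - 1, qmax)
    return quantized * scale                # dequantized approximation

# Example: compare rounding error introduced by different bit widths
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.05, size=10_000)  # stand-in for one layer's weights

for bits in (16, 12, 8, 4):
    approx = quantize_uniform(weights, bits)
    mse = np.mean((weights - approx) ** 2)
    print(f"{bits:2d}-bit weights: mean squared quantization error = {mse:.2e}")
```

Narrower bit widths shrink the multiply-accumulate hardware and on-chip memory footprint, but the quantization error grows, which is the cost vs. accuracy trade-off the talk examines.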

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.
