Jonathan Su, CEO of Pilot AI, presents the “Pilot AI Vision Framework: From Doorbells to Defense” tutorial at the May 2018 Embedded Vision Summit.

Pilot AI’s Vision Framework has enabled real-time detection, classification and tracking in thousands of devices, from consumer applications to federal contracts. Though diverse in end-user application, these use cases all share a common constraint: limited compute. Small consumer electronics are compute-constrained because BoM cost limits the amount of silicon that can be integrated, whereas federal use cases are compute-constrained because the problem (seeing from thousands of feet in the sky) requires processing a tremendous amount of data in real time without reliable network connectivity.

Scaling a single framework across the diverse set of hardware platforms these applications represent, from ultra-low-power DSPs and microcontrollers to full-size GPUs, is what differentiates Pilot AI’s Vision Framework. Su introduces Pilot AI’s deep learning-based computer vision framework for compute-constrained devices and demonstrates it in real-world applications to motivate the drive toward embedded deep learning.

