“Deep-learning-based Visual Perception in Mobile and Embedded Devices: Opportunities and Challenges,” a Presentation from Qualcomm

Jeff Gehlhaar, Vice President of Technology, Corporate Research and Development, at Qualcomm, presents the "Deep-learning-based Visual Perception in Mobile and Embedded Devices: Opportunities and Challenges" tutorial at the May 2015 Embedded Vision Summit.

Deep learning approaches have proven extremely effective for a range of perceptual tasks, including visual perception. Incorporating deep-learning-based visual perception into devices such as robots, automobiles and smartphones enables these machines to become much more intelligent and intuitive. And, while some applications can rely on the enormous compute power available in the cloud, many systems require local intelligence for various reasons. In these applications, the enormous computing requirements of deep-learning-based vision create unique challenges related to power and efficiency.

In this talk, Jeff explores applications and use cases where on-device deep-learning-based visual perception provides great benefits. He dives deeply into the challenges that these applications face, and explores techniques to overcome them.

Here you’ll find a wealth of practical technical insights and expert advice to help you bring AI and visual intelligence into your products without flying blind.
